I’ve been curious about running local LLMs for development tasks — code review, summarisation, drafting — without relying on cloud APIs. Ollama makes this straightforward to set up, but most benchmarks assume beefy hardware. I wanted to know: what’s the experience like on a standard development machine?
My setup: a laptop with a 12th-gen Intel i7, 16GB RAM, integrated graphics. No discrete GPU. This is the kind of machine most developers actually use.
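For context on what "straightforward" means here: once the Ollama server is running locally, it exposes a small HTTP API on port 11434, which makes it easy to time responses from a script. A minimal sketch, assuming a default local install and an already-pulled model (the model name and prompt below are placeholders, not part of any benchmark):

```python
import json
import time
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object instead of a token stream
    }

def time_prompt(model: str, prompt: str) -> float:
    """Send a prompt and return wall-clock seconds until the full response."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        json.load(resp)  # block until the complete response body arrives
    return time.perf_counter() - start

# Usage, with `ollama serve` running and a model pulled (e.g. `ollama pull llama3.2`):
#   print(f"{time_prompt('llama3.2', 'Summarise this file in one line.'):.1f}s")
```

On a CPU-only machine like this one, wall-clock time per response is the number that actually matters day to day, which is why the sketch measures that rather than tokens per second.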
I wanted to understand how AI-powered search tools — things like ChatGPT browsing, Perplexity, and Google’s AI Overviews — actually crawl and interpret a static site. Most SEO advice is still written for traditional search engines. Does structured data help AI crawlers, or are they just parsing the raw HTML?
The question: if I add JSON-LD schema markup to a simple Hugo site, does it measurably change how AI tools understand and reference the content?
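For readers unfamiliar with it, JSON-LD is a `<script type="application/ld+json">` block embedded in the page, describing the content with schema.org vocabulary; in Hugo it would typically be emitted from a template partial using page variables. A sketch of the kind of minimal BlogPosting object the experiment adds (all field values below are placeholders):

```python
import json

def blog_posting_jsonld(title: str, url: str, date_iso: str, author: str) -> str:
    """Render a minimal schema.org BlogPosting object as a JSON-LD string."""
    data = {
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        "headline": title,
        "url": url,
        "datePublished": date_iso,
        "author": {"@type": "Person", "name": author},
    }
    return json.dumps(data, indent=2)

# In a Hugo layout this would live in a partial and be wrapped in a
# script tag, e.g.:
#   <script type="application/ld+json">{{ partial "jsonld.html" . }}</script>
snippet = blog_posting_jsonld(
    "Does JSON-LD help AI crawlers?",      # placeholder title
    "https://example.com/posts/jsonld/",   # placeholder URL
    "2024-01-15",
    "Example Author",
)
print(snippet)
```

The point of the experiment is precisely whether a block like this changes anything for AI tools, so the markup itself is deliberately boring: a handful of fields any blog post already has.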
The way people find information is changing. Traditional search engines still matter, but AI-powered tools — ChatGPT browsing, Perplexity, Google AI Overviews, Claude’s web access — are becoming a significant discovery channel. If your content is well-structured and easy for these systems to parse, you’re more likely to be cited and surfaced in their answers.