Why I Let AI Write My Blog (Mostly)
I got tired of having more ideas than published posts.
Typical loop. I collect article ideas in Notion, get picky, overthink structure, then ship nothing for weeks. Meanwhile, my RSS feed is full of people shipping weekly without looking completely burned out.
So I did what every developer does when they feel guilty about not writing. I automated it.
For three months I ran a semi-automatic publishing workflow for richardlemon.com. AI did the heavy lifting. I edited, guided, and watched the numbers.
This is not a victory lap. Some posts flopped. Some quietly pulled in traffic with almost no effort. The interesting part is the pattern that showed up in the data, and how the workflow evolved under pressure.
The Setup: Constraints First, Tech Second
I started with constraints, not models.
I wanted:
- 2 posts per week, for 12 weeks.
- Max 60 minutes of my time per post.
- No content that felt like generic SEO soup.
- Everything in Git, same as the rest of the site.
The stack ended up like this:
- Static site: Next.js, MDX, deployed on Vercel.
- Content store: GitHub repo with content/posts/*.mdx.
- Scheduling / jobs: GitHub Actions + a tiny cron worker.
- Generation: OpenAI API wrapped in a Node script.
- Analytics: Plausible for traffic metrics.
Nothing revolutionary. Just enough pieces wired together so I could focus on prompts and editing instead of copy-pasting into a CMS.
The Workflow: From Idea To Published Without Tab Hell
The real problem was not writing. It was context switching. Docs, editor, browser, CMS, image tools. Too much friction.
I wanted one entry point. A plain text file that described what I wanted, then a script that handled the rest.
1. Ideas Live In A Single File
I keep an ideas.md in the repo:
# Backlog
- 3 months of AI-generated posts: what the data says [priority: high]
- What baseball coaching taught me about refactoring
- Building a glucose dashboard with cheap sensors
# Drafting
- Why I think "learn to code" is bad advice
Each line is a future article. When I commit something under # Drafting, it becomes eligible for generation.
2. A Tiny Spec Per Post
Next to each idea I add a minimal spec file under specs/, for example ai-posts-3-months.yaml:
slug: 3-months-of-ai-generated-blog-posts-what-the-data-says
word_count: 1300-1500
tone: direct, first-person, opinionated
angle: meta build-in-public, document the automation workflow
must_include:
- hard numbers from analytics
- what the workflow actually looks like
- what failed
No long brief. Just guardrails. This file is what the AI sees, not my entire brain dump in Notion.
3. One Command To Generate
The core script is a plain Node CLI:
pnpm generate-post --spec specs/ai-posts-3-months.yaml
Roughly, it does this:
- Reads the spec.
- Builds a system prompt with my style rules.
- Sends it to the model.
- Validates that the response is valid MDX with frontmatter.
- Writes content/posts/[slug].mdx.
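To make the validation step concrete, here is a minimal sketch of the frontmatter check, assuming a `validateFrontmatter` helper. The name and required keys are illustrative, not the actual script:

```javascript
// Sketch of the check the generate script runs before writing
// content/posts/[slug].mdx: refuse any response that is missing
// a frontmatter block or a required field.
function validateFrontmatter(mdx, requiredKeys = ["title", "slug", "date"]) {
  // Frontmatter must be the very first thing in the file: ---\n...\n---
  const match = mdx.match(/^---\n([\s\S]*?)\n---\n/);
  if (!match) return { ok: false, missing: requiredKeys };

  // Naive "key:" parsing is enough to catch a model that forgot a field.
  const keys = match[1]
    .split("\n")
    .map((line) => line.split(":")[0].trim())
    .filter(Boolean);

  const missing = requiredKeys.filter((k) => !keys.includes(k));
  return { ok: missing.length === 0, missing };
}
```

If the check fails, the script exits non-zero and nothing touches the repo, which keeps bad generations out of Git history entirely.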
The system prompt matters more than people like to admit. Mine includes things like:
- No long paragraphs, keep 2–3 lines max.
- First-person, I actually did this.
- No fake numbers, leave "TODO" placeholders instead.
- No fluff phrases I personally hate.
The last one is key. If I do not block my banned phrases, I end up rewriting half the post anyway.
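A sketch of what that banned-phrase guard can look like as a post-generation lint. The phrases listed here are stand-ins, since the real list is personal:

```javascript
// Hypothetical banned-phrase lint. The real list lives in the system
// prompt; this is a belt-and-braces check on the generated draft.
const BANNED = ["in today's fast-paced world", "delve into", "game-changer"];

function findBannedPhrases(draft, banned = BANNED) {
  const lower = draft.toLowerCase();
  return banned.filter((phrase) => lower.includes(phrase));
}
```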
4. Edit Like A Developer, Not A Copywriter
Generated posts land as draft MDX files. No CMS. I open them in VS Code and work like on any other file.
My edit pass is lightweight, because I budget 60 minutes total:
- Replace all TODO metrics with real numbers from Plausible.
- Delete generic intros and swap with one blunt paragraph.
- Inject real details: tools, commands, file paths, mistakes.
- Fix headings that sound like LinkedIn carousel slides.
The rule: if I am rewriting more than 30% of the content, the prompt failed, not the model. Then I update the spec or the system prompt before generating the next one.
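The 30% rule can be sanity-checked mechanically. Here is a rough line-based sketch, purely illustrative; in practice I just eyeball the diff:

```javascript
// Rough "how much did I rewrite?" metric: the fraction of lines in the
// final post that do not appear anywhere in the generated draft.
function rewriteRatio(draftText, finalText) {
  const draftLines = new Set(
    draftText.split("\n").map((l) => l.trim()).filter(Boolean)
  );
  const finalLines = finalText
    .split("\n")
    .map((l) => l.trim())
    .filter(Boolean);
  if (finalLines.length === 0) return 1;

  const changed = finalLines.filter((l) => !draftLines.has(l)).length;
  return changed / finalLines.length;
}
```

Anything above 0.3 means the spec or system prompt needs work before the next generation, not more editing effort.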
5. Ship By Pull Request, Not By Vibes
Every post is a branch and a pull request. The PR template asks me three things:
- Did I read the entire article end to end?
- Does it contain specific details that only I would know?
- Would I send this to a friend without apologising?
When the PR merges, GitHub Actions rebuilds and deploys. There is no "publish" button. If the build passes, the post is live.
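The workflow is roughly this shape. This is a sketch, not my exact file; in practice Vercel's Git integration handles the deploy itself, so the Action mostly has to prove the build passes before merge:

```yaml
# .github/workflows/deploy.yml (illustrative)
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
      - run: pnpm install --frozen-lockfile
      - run: pnpm build
```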
The Numbers: Traffic, Engagement, And Some Surprises
Three months, 2 posts per week, 24 posts total. Of those, 19 were AI-assisted; the other 5 were fully hand-written because I did not trust AI for those topics.
Baseline Traffic Before The Experiment
Before starting this workflow, the site had a small but steady stream of traffic:
- Sessions per month: ~480
- Pages per session: 1.3
- Average time on page: 1:10
- New posts: basically zero for two months
So any change in traffic is more or less content volume plus whatever SEO magic shows up from actually publishing.
Month 1: Volume Without Personality
First month I went fast. Minimal editing. Mostly trusted the model.
- Posts published: 8
- Sessions: 910
- Average time on page: 0:58
- Top post: a technical workflow article with specific code blocks
Traffic almost doubled. Engagement did not. Time on page dropped, and scroll depth looked shallow on most posts.
The pattern was obvious from reading: posts sounded like a slightly opinionated documentation page, not like something written by a person who had actually broken production on a Friday night.
Month 2: More Edits, Fewer Posts
In month two I slowed down and edited harder.
- Posts published: 6
- Sessions: 1,120
- Average time on page: 1:24
- Bounce rate dropped by ~9%
I cut a lot of generic "why this matters" paragraphs. I also added real metrics, filesystem paths, and even a couple of flat-out "this approach sucked" sections.
The interesting part. Fewer posts, more traffic, better engagement. The posts that did well were almost always the ones where I injected something embarrassingly specific, like a screenshot description of a failing GitHub Action or a mistake I made in the API prompt.
Month 3: Blending AI Drafts With Manual Posts
By month three the backlog had a mix of "AI can help" topics and "I need to hand write this" topics.
- Posts published: 10
- Sessions: 1,540
- Average time on page: 1:32
- Two posts started ranking for long-tail queries I did not even target explicitly
The funny part. The highest performing post that month was about a baseball coaching drill that maps to debugging. I wrote that one almost entirely by hand, only using AI to help with structure.
Second place was an AI-assisted piece where I swapped out half the generic bits with log output from a broken Cloudflare Worker.
The common trait was obvious in hindsight. Specificity, not polish.
What Actually Worked (And What Did Not)
Worked: AI As Structure, Not Story
Where AI shined for me was structure. Headings, section order, making sure I did not forget to explain a dependency that appeared halfway down the page.
Where it failed was story. If I let the model narrate why something mattered, it defaulted to generic takes. So I started treating AI sections like scaffolding. Keep the outline. Replace the soul.
Worked: Style Rules Baked Into The System Prompt
Most people tweak prompts at the user level. I think this is backwards for recurring work.
I pushed the non-negotiables into the system prompt:
- No 800 word intros. Get to the point in 2 short paragraphs.
- Assume reader is a developer, not a beginner.
- Always use real tools, libraries, and commands. No invented names.
- If you do not know a number, write "[[METRIC_HERE]]" so I can replace it.
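A placeholder guard, run before merge, catches any `[[...]]` token that survived the edit pass. Sketch only, with a hypothetical function name:

```javascript
// Pre-publish guard: fail the build if any [[PLACEHOLDER]] token is
// still in the post, e.g. an un-replaced [[METRIC_HERE]].
function findUnreplacedPlaceholders(mdx) {
  return mdx.match(/\[\[[A-Z_]+\]\]/g) ?? [];
}
```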
This cut my edit time almost in half by the third week. Bad content was now obviously wrong at the outline stage instead of sneaking into the final piece.
Worked: Git-First Content
Keeping everything in Git was huge. I could:
- Review diffs like code.
- Revert a section that became too "content marketing" heavy.
- Track how my prompts changed article quality over time.
Also, no vendor lock-in. If I decide to switch models or providers, the content stays plain MDX. Which is the only thing I trust long term.
Failed: Treating AI As A Black Box Copywriter
The first week I fell into the trap of "let the AI write, I will just tweak a few sentences".
That does not work. I either:
- Spent 90 minutes rewriting half the post, or
- Shipped something that felt like it came out of a content farm
I stopped thinking of it as a copywriter. It is a fast collaborator that is very bad at being me, but very good at proposing structure, edge cases, and phrasing alternatives when I am stuck.
Failed: Chasing Keywords
I tried one small batch of posts where I let the model target specific keywords more aggressively. Those posts technically ranked for a few long-tail phrases. They also had the worst engagement.
My audience is mostly developers who can smell SEO sludge instantly. I think chasing keywords for a niche personal site is a distraction. Better to write specific stories and let organic queries find them over time.
How The Automation Changed My Writing Habit
The main benefit was not traffic. It was breaking the perfectionist bottleneck.
Once I had a script that could create a "good enough" draft in 30 seconds, the cost of starting a new article dropped to almost zero. I stopped hoarding ideas. I started testing them.
My weekly rhythm looked like this by month three:
- Monday: pick 2 ideas, generate drafts, 30–40 minutes of editing each.
- Tuesday: merge PRs, deploy, share one post.
- Thursday: ship the second post.
- Friday: check Plausible, tag posts that are picking up traction.
That is it. No calendars, no complicated content strategy documents, no "ideal customer persona" slides.
What I Would Change For The Next 3 Months
Three months was enough to see the rough shape of this system. If I keep going, I will change a few things.
More Templates, Fewer One-Off Prompts
I noticed that I naturally gravitate to three types of posts:
- Build logs: here is a thing I built, step by step.
- Postmortems: here is a thing that broke, and why.
- Philosophy-in-practice: here is a belief I have, backed by real experiments.
Each of these deserves its own template and system prompt variant. Right now I am still trying to make one generic style work for all of them, which creates friction.
Tighter Feedback Loop Between Analytics And Prompts
I am not using analytics data enough yet. The obvious next step is a small script that:
- Pulls top performing posts from Plausible by time on page.
- Extracts heading patterns and length.
- Feeds that back into the generation hints.
Not to create a formula. Just to keep myself honest about what my actual readers spend time on, instead of what I think they like.
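A rough sketch of what that script could look like. The Plausible Stats API endpoint and parameters here are assumptions based on my reading of their docs; verify them before relying on this:

```javascript
// Assumed Plausible Stats API shape: GET /api/v1/stats/breakdown with a
// Bearer token. Check Plausible's docs; treat endpoint and params as
// assumptions. Requires Node 18+ for global fetch.
async function fetchPageStats(siteId, apiKey) {
  const url = new URL("https://plausible.io/api/v1/stats/breakdown");
  url.searchParams.set("site_id", siteId);
  url.searchParams.set("period", "30d");
  url.searchParams.set("property", "event:page");
  url.searchParams.set("metrics", "visitors,time_on_page");

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Plausible API returned ${res.status}`);
  const { results } = await res.json();
  return results;
}

// Ranking is pure, so it can be tested without hitting the API.
function rankPostsByTimeOnPage(results, limit = 5) {
  return results
    .filter((r) => r.page.startsWith("/posts/"))
    .sort((a, b) => b.time_on_page - a.time_on_page)
    .slice(0, limit);
}
```

The ranked slugs would then get appended to the generation hints, nothing fancier than that.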
Partial Generation Instead Of Full Drafts
Some posts do not need full generation. They need help on specific pieces.
I will probably split the CLI into:
- generate-outline: only headings and rough bullet points.
- generate-section: flesh out a single section I am stuck on.
- generate-full: the current behaviour.
This aligns better with how I already write manually. Short focused pushes, not monolithic sessions.
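The split would probably share one entry point with a tiny dispatch, something like this, with the handler bodies stubbed out (the real ones would call the model):

```javascript
// Hypothetical subcommand dispatch for the split CLI.
function runCli(argv, handlers) {
  const [command, ...args] = argv;
  const handler = handlers[command];
  if (!handler) throw new Error(`Unknown command: ${command}`);
  return handler(args);
}

// Stub handlers, just to show the wiring.
const handlers = {
  "generate-outline": (args) => `outline for ${args[0]}`,
  "generate-section": (args) => `section ${args[1]} of ${args[0]}`,
  "generate-full": (args) => `full draft for ${args[0]}`,
};
```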
Is AI Worth It For A Personal Dev Blog?
For me, yes. With one big condition.
AI has made it cheaper for me to ship more often without burning out. That alone is valuable. The traffic bump is nice, but not the main reason I keep the workflow around.
The condition is simple. The posts that work best still feel human. They have scars. They have bad decisions and specific tools and awkward numbers that do not round nicely.
If I let the model smooth all of that out, I end up with something forgettable, even if it ranks for a few keywords.
So I treat the workflow like any other automation in my stack. It handles the repetitive parts: structure, formatting, boilerplate. I keep the weird, sharp, honest parts for myself.
Three months of data says that mix works better than I expected. Not magic. Just enough leverage to make writing feel like building again instead of homework.