Building with GPT-5: What Actually Works

After months of experimentation, here's what I've learned about building with AI tools. Real patterns, real failures, real insights.


After months of building products with GPT-5 and other large language models, I've accumulated hard-won lessons about what actually works. This isn't theory—it's from real experiments, real failures, and real products.

The Core Insight

AI tools are force multipliers, not replacements. The builders who are succeeding aren't the ones trying to automate everything—they're the ones using AI to amplify their existing skills.

Here's the pattern I see working:

  1. Human sets the direction and quality bar
  2. AI generates first drafts and handles grunt work
  3. Human refines, curates, and adds the "soul"

This creates output that's 10x faster but still distinctly human.

What Actually Works

1. Structured Prompting

The single biggest improvement I've seen in AI output comes from structured prompts. Instead of:

"Write a blog post about SEO"

Use:

"Write a blog post about SEO for solo founders. Include:

  • 3 actionable tactics they can implement today
  • Common mistakes to avoid
  • One contrarian take based on real data

Tone: Direct, practical, no fluff"

The difference in output quality is dramatic.
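The structure above can be captured in a small template helper so every prompt carries the same slots: topic, audience, explicit requirements, and tone. This is an illustrative sketch (the function name and signature are mine, not any library's API):

```python
def build_prompt(topic: str, audience: str, requirements: list[str], tone: str) -> str:
    """Compose a structured prompt: goal + audience, explicit bullet
    requirements, and an explicit tone constraint at the end."""
    lines = [f"Write a blog post about {topic} for {audience}. Include:"]
    lines += [f"- {req}" for req in requirements]
    lines.append(f"Tone: {tone}")
    return "\n".join(lines)

prompt = build_prompt(
    "SEO",
    "solo founders",
    ["3 actionable tactics they can implement today",
     "Common mistakes to avoid",
     "One contrarian take based on real data"],
    "Direct, practical, no fluff",
)
```

The payoff is consistency: once the slots are explicit, you can vary audience or tone without rewriting the whole prompt.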

2. Iteration Over Perfection

The fastest way to get great output:

  1. Generate 3-5 variations
  2. Cherry-pick the best elements from each
  3. Combine and refine
  4. Human edit for voice and accuracy

Trying to get a perfect output on the first try is slower than iterating.
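The generate-then-pick loop can be sketched as a generic best-of-n helper. Here `generate` stands in for whatever model call you make and `score` for your quality heuristic (or a human rating); both are assumptions, not a real API:

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def best_of_n(generate: Callable[[int], T], score: Callable[[T], float], n: int = 5) -> T:
    """Generate n variations, score each, and return the best one.

    generate(i) produces variation i; score(draft) rates it.
    The human step -- combining elements and editing for voice --
    happens after this returns a starting draft.
    """
    drafts = [generate(i) for i in range(n)]
    return max(drafts, key=score)
```

In practice the scoring step is often a quick human skim rather than a function, but the shape of the loop is the same: diverge first, converge second.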

3. Context Windows Are Your Friend

With GPT-5's expanded context window, you can now:

  • Feed in your entire writing style guide
  • Include examples of your best work
  • Provide detailed brand guidelines

The AI learns your voice from examples much better than from descriptions.
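Assembling that context is mostly careful concatenation with a budget check. A minimal sketch, assuming a character count as a rough stand-in for a token budget (the section headers and limit are illustrative choices, not a standard):

```python
def assemble_context(style_guide: str, examples: list[str], task: str,
                     max_chars: int = 400_000) -> str:
    """Build one context block: style guide first, then worked examples,
    then the actual task, separated by clear section headers."""
    parts = ["## Style guide", style_guide]
    for i, example in enumerate(examples, start=1):
        parts += [f"## Example {i}", example]
    parts += ["## Task", task]
    context = "\n\n".join(parts)
    if len(context) > max_chars:
        raise ValueError("context exceeds budget; trim the example list")
    return context
```

Ordering matters less than labeling: clearly separated sections make it obvious which text is reference material and which is the request.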

What Doesn't Work

Fully Automated Content

Every attempt I've made at fully automated content pipelines has produced mediocre results. The content looks fine at first glance but lacks:

  • Original insights
  • Personal experience
  • Controversial takes
  • The "soul" that makes content shareable

Human curation remains essential.

Complex Reasoning Chains

For complex, multi-step reasoning, I've found it better to:

  1. Break the problem into smaller steps
  2. Validate each step before proceeding
  3. Use the AI as a thinking partner, not an oracle

Asking the AI to solve complex problems in one shot leads to plausible-sounding but wrong answers.
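The break-validate-proceed loop can be sketched as a small pipeline runner. Each `step` stands in for one scoped model call, and `validate` is whatever check you apply before trusting the intermediate result (both names are mine, for illustration):

```python
from typing import Any, Callable

def solve_stepwise(steps: list[Callable[[Any], Any]],
                   validate: Callable[[Any], bool]) -> Any:
    """Run each step on the previous step's output, validating the
    intermediate result before proceeding. Failing fast at the bad
    step beats discovering a wrong answer at the end."""
    state = None
    for i, step in enumerate(steps, start=1):
        state = step(state)
        if not validate(state):
            raise ValueError(f"validation failed after step {i}")
    return state
```

The point isn't the code; it's that each step is small enough for you to check, which is what makes the AI a thinking partner rather than an oracle.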

My Current Stack

| Tool | Use Case |
|------|----------|
| Claude | Long-form writing, complex reasoning |
| GPT-5 | Code generation, quick iterations |
| Cursor | Development work |
| Midjourney | Visual concepts |

The key insight: use different tools for different tasks. No single AI is best at everything.

The Bigger Picture

We're in the "good enough" era of AI. The tools are good enough to be genuinely useful, but not good enough to replace human judgment.

The winners will be the people who learn to work with AI effectively—treating it as a powerful junior employee rather than a magical oracle.

What's Next

I'm currently experimenting with:

  • Custom fine-tuned models for specific tasks
  • AI-assisted research workflows
  • Automated content repurposing pipelines

I'll share the results as I learn more.


Have questions about AI development? Reply in the comments or reach out on Twitter.

Last Updated: Jan 4, 2026
Category: AI