How to Write Better Prompts for GPT o1 and other models

Using Ben Hylak’s Templated Approach for ChatGPT’s Reasoning Models

📐 Prompting Isn’t Magic. It’s Modular.

The biggest mistake is writing prompts like conversations.
GPT models don’t think like humans—they parse structure, not sentiment.

That’s why structured prompts consistently outperform casual ones.

Here’s the format:

⚙️ The 4-Part Prompt Formula

  1. 🎯 Goal – What exactly do you want the model to do?
  2. 📦 Return Format – What should the response look like?
  3. 🚧 Warnings & Constraints – What to avoid, double-check, or clarify?
  4. 🧠 Context Dump – Everything that makes your request unique or personal.

🔍 The 4-Part Prompt Structure (Explained)

Ben Hylak’s template for GPT o1 works because it aligns with how these models are designed to respond best: when they’re given clear instructions, a defined output format, any relevant constraints, and rich context. This structured approach helps the model minimize hallucinations, interpret your intent more accurately, and produce more useful responses. You’re not just prompting — you’re briefing a system optimized for reasoning under instruction.

Here’s how each part works:

🟩 1. Goal – What are you trying to accomplish?

This is the first sentence or paragraph of your prompt.

It should tell the model what the end result should look like, as if you were delegating a task to a smart intern.

✅ Example:

I want to draft a professional summary of our team’s monthly performance report to send to the department head.

This sets the scope, tone, and intent. Be specific but not overstuffed. Think about what GPT should do, not yet how.


🔵 2. Return Format – How should the response be structured?

This section tells GPT how you want the answer delivered. Bullets? Table? Email-style summary? Formal report?

The clearer the output format, the less GPT has to guess.

✅ Example:

Return the summary as a professional email. Include key highlights: completed projects, KPIs, blockers, and any pending actions. Keep the tone formal but approachable.

If you skip this step, you risk getting long, rambly text or inconsistent answers.


🔴 3. Warnings or Constraints – What to avoid? What to verify?

This is where you highlight risks or add non-negotiables. GPT often hallucinates — so give it instructions to double-check, avoid assumptions, or flag uncertainty.

✅ Example:

Only include projects that were marked as completed in the report. Avoid overly technical jargon or internal acronyms. Double-check that any numbers or percentages match the data I provided.

Warnings help GPT prioritize truthfulness and quality over creativity when that matters.


⚪ 4. Context Dump – Why does this matter? What background can help?

This is the “everything else” zone: your motivations, edge cases, personal preferences, emotional drivers, etc.

This section is optional but hugely powerful. It’s what makes GPT feel like it “gets you.”

✅ Example:

This email is part of a monthly update we send to upper management. The goal is to highlight progress and raise visibility on blockers without sounding like we’re making excuses. The department head likes concise emails (ideally under 250 words). We’ve been working remotely and asynchronously, so clarity is important.

The context helps GPT choose better answers for you — not just generically “good” ones.
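If you build prompts programmatically, the four parts above translate naturally into a small template function. This is just a minimal sketch in Python — the function and parameter names are my own illustration, not part of Hylak's template — filled in with the running example from this article:

```python
def build_prompt(goal: str, return_format: str, warnings: str, context: str) -> str:
    """Assemble a prompt from the four parts: Goal, Return Format, Warnings, Context."""
    return "\n\n".join([
        f"Goal:\n{goal}",
        f"Return format:\n{return_format}",
        f"Warnings:\n{warnings}",
        f"Context:\n{context}",
    ])

# The article's monthly-report example, condensed into the four slots.
prompt = build_prompt(
    goal="Draft a professional summary of our team's monthly performance "
         "report to send to the department head.",
    return_format="A professional email covering completed projects, KPIs, "
                  "blockers, and pending actions; formal but approachable tone.",
    warnings="Only include projects marked as completed. Avoid jargon and "
             "internal acronyms. Double-check numbers against the data provided.",
    context="Part of a monthly update to upper management; highlight progress "
            "and blockers without sounding defensive; keep it under 250 words.",
)
print(prompt)
```

Keeping the four parts as separate arguments makes it easy to reuse the same goal and format while swapping in fresh context each month.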


🧠 Why This Structure Works So Well

GPT isn’t just predicting words — it’s reasoning through your instructions. This format helps the model:

  • Understand your intent clearly.
  • Structure output to reduce back-and-forth.
  • Avoid classic mistakes (hallucinations, irrelevant answers).
  • Personalize results to your needs.