Using ChatGPT as a ruthless skeptic to find holes in your outline before you film.
Source: This technique comes from Caleb Ralston's consultation with Taki Moore. Caleb has been the content strategist behind Gary Vee and the Hormozis for 16 years.
The default behavior of AI assistants is to agree with you. Ask ChatGPT if your idea is good, and it will find reasons to tell you it's good. That's not useful when you're trying to find weak points before you commit to production.
The trick is to explicitly instruct it to disagree. To rip the outline apart. To try to disprove your claims before you've invested hours filming content that doesn't hold up.
"I tell it to be ruthless," Caleb explained. "Rip every single thing apart."
The Default Problem: AI Just Agrees
Without specific instructions, AI tools are optimized to be helpful in a way that feels supportive. They validate your thinking, highlight the strengths in your ideas, and downplay potential problems.
This is fine for brainstorming. It's terrible for quality control.
If you share an outline and ask "is this good?", you'll get a response that emphasizes the good parts and gently suggests improvements. What you won't get is someone telling you the core concept is flawed, the angle isn't differentiated, or the audience won't care.
That kind of feedback is what you actually need, but you have to ask for it explicitly.
Setting Up the Skeptic Frame
The prompt structure that works is direct and explicit about the adversarial role you want.
Something like:
"I'm going to share a content outline with you. Your job is to be a ruthless skeptic. Try to disprove every claim I'm making. Find the weak points. Tell me what doesn't work and why. Don't be supportive or encouraging, be critical."
This reframes the interaction from helpful assistant to adversarial reviewer. The AI is now looking for problems instead of validating strengths.
You can intensify this further:
"Pretend you're a competitor who wants my content to fail. What would you attack? Where are the holes I'm not seeing?"
Or:
"Be the cynical viewer who's seen a hundred videos like this. What makes them scroll past mine?"
The specific framing matters less than the explicit instruction to be critical rather than supportive.
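If you're scripting this against a chat-style API rather than typing into ChatGPT, the skeptic frame maps naturally onto a system message. Here is a minimal sketch: the function name and the exact wording are illustrative, and the message format assumes the common role/content chat-completions shape.

```python
def build_skeptic_messages(outline: str, intensity: str = "ruthless") -> list:
    """Build a chat message list that frames the model as an adversarial reviewer.

    `intensity` switches between the framings described above; both are
    paraphrases of the prompts in this article, not official wording.
    """
    system = (
        "You are a ruthless skeptic reviewing a content outline. "
        "Try to disprove every claim. Find the weak points. "
        "Tell me what doesn't work and why. "
        "Don't be supportive or encouraging; be critical."
    )
    if intensity == "competitor":
        system = (
            "Pretend you're a competitor who wants this content to fail. "
            "What would you attack? Where are the holes the author isn't seeing?"
        )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Outline:\n{outline}"},
    ]
```

The system message does the reframing once, so every follow-up question in the same conversation inherits the adversarial stance.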
Section-by-Section Refinement
Once the skeptic frame is established, work through the outline section by section.
Start with the hook. "Here's my opening hook. Try to break it. Why would someone scroll past this instead of stopping?"
Move to the promise. "Here's what I'm promising the viewer. Is this promise specific enough? Is it differentiated? Is it actually deliverable in this video?"
Test the proof. "Here's my evidence for this claim. What's missing? What would a skeptic demand to see before believing this?"
Challenge the structure. "Here's my section breakdown. Does this build logically? Where do I lose momentum? What's unnecessary?"
Stress test the conclusion. "Here's how I'm ending. Is this satisfying? Does it deliver on the promise? Is the audience glad they watched the whole thing?"
Each section gets interrogated separately, which produces more specific feedback than a general "is this good?" review.
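If you run this review repeatedly, the section-by-section pass is easy to template. A small sketch, with hypothetical names and the questions above lightly paraphrased:

```python
# Per-section skeptic questions, paraphrasing the prompts above.
SECTION_PROMPTS = {
    "hook": "Try to break this opening hook. Why would someone scroll past it instead of stopping?",
    "promise": "Is this promise specific enough? Is it differentiated? Is it actually deliverable in this video?",
    "proof": "What's missing from this evidence? What would a skeptic demand to see before believing it?",
    "structure": "Does this breakdown build logically? Where does momentum drop? What's unnecessary?",
    "conclusion": "Is this ending satisfying? Does it deliver on the promise?",
}

def section_review_prompts(outline: dict) -> list:
    """Pair each outline section with its skeptic question, in outline order.

    `outline` maps section names (hook, promise, ...) to their draft text.
    Returns (section, full_prompt) tuples to send one at a time, so each
    section gets interrogated separately rather than in one lump review.
    """
    return [
        (name, f"{SECTION_PROMPTS[name]}\n\n{text}")
        for name, text in outline.items()
        if name in SECTION_PROMPTS
    ]
```

Sending each tuple as its own message keeps the feedback specific; batching the whole outline into one prompt tends to produce the vague "overall this is solid" review you're trying to avoid.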
When to Stop (The Evergreen Test)
There's a point where the skeptic has done its job and further criticism isn't productive. Caleb mentioned testing content against an "evergreen" standard.
"Will this be relevant in 2032?" The question isn't about predicting the future; it's about identifying whether your content depends on temporary circumstances.
If the claims you're making only work right now because of a specific trend or a temporary market condition, the content has a limited shelf life. If the claims work regardless of when someone watches, the content compounds over time.
The AI can help with this: "Try to disprove this content from the perspective of someone watching it in 5 years. What becomes dated? What still holds?"
Content that survives the evergreen test is worth the production investment. Content that fails it might still be worth making, but with different expectations about longevity and ongoing value.
Practical Workflow Integration
For teams producing volume, the AI skeptic can become a standard checkpoint in the pre-production process.
Before filming: Every outline runs through the skeptic prompt. If the AI finds fundamental problems, the outline goes back for revision.
After script completion: The full script gets tested. "Where does this drag? Where does the argument get weak? Where would the viewer lose interest?"
For title and thumbnail: "Here's my title and thumbnail concept. Why would this fail to get clicks? What's the immediate objection a viewer would have?"
For claims and statistics: "Here's a specific claim I'm making. How would a fact-checker challenge this? What evidence would make this more credible?"
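For teams wiring this into a pipeline, the four checkpoints above can be expressed as a single pass. A sketch under stated assumptions: `ask` stands in for whatever prompt-to-reply wrapper you use around your chat API, and the prompt wording paraphrases the checkpoints above.

```python
# Checkpoint prompts for each pre-production asset. `ask` is any
# prompt -> reply callable (e.g. a thin wrapper around a chat API).
CHECKPOINTS = {
    "outline": "Be a ruthless skeptic. Try to disprove every claim in this outline. What's fundamentally broken?",
    "script": "Where does this script drag? Where does the argument get weak? Where would the viewer lose interest?",
    "packaging": "Why would this title and thumbnail concept fail to get clicks? What's the viewer's immediate objection?",
    "claims": "How would a fact-checker challenge this claim? What evidence would make it more credible?",
}

def run_checkpoints(assets: dict, ask) -> dict:
    """Run each asset through its skeptic prompt and collect the critiques.

    Assets with no matching checkpoint are skipped; anything the skeptic
    flags as fundamental goes back for revision before filming.
    """
    return {
        name: ask(f"{CHECKPOINTS[name]}\n\n{text}")
        for name, text in assets.items()
        if name in CHECKPOINTS
    }
```

Because `ask` is just a callable, the same function works with any model provider, or with a human reviewer filling in for the AI.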
The goal isn't to make content that's invulnerable to criticism. It's to find the obvious weak points before you've committed production resources.
Most content fails for predictable reasons: the concept wasn't interesting enough, the angle wasn't differentiated, the promise wasn't clear, the evidence wasn't compelling. These problems are much easier to fix in a Word document than in a finished video.
The AI skeptic catches the predictable failures early, when changing direction is cheap.