Most media buyers start creative development by brainstorming ad concepts or briefing designers. They think creativity comes from inspiration, talent, or gut instinct about what will resonate.
Isaac learned the opposite is true. After spending $4M+ profitably across Meta and Google, he discovered that research determines creative success far more than creative talent ever could.
Isaac, a verified media buyer, built his entire creative process around research-driven briefs rather than designer intuition. While others guess what might work, he uses systematic research to identify what should work before ever creating an ad.
The Creative Testing Problem Nobody Talks About
When Isaac looks at struggling accounts, he sees the same pattern repeatedly: media buyers testing creative concepts that were doomed from the start.
Not because the ads look bad. Not because the copy is weak. But because the fundamental concept – the angle, the persona, the offer positioning – was never validated through research before spending thousands on production and testing.
“A lot of people don’t do research very well,” Isaac observes. “They might have a client who’s already done some research, and they just accept that and move forward.”
The problem compounds when you’re working with accounts that don’t have extensive performance history. New brands, new markets, products without proven ad creative – these are the situations where most media buyers struggle because they don’t have historical data to guide them.
Isaac’s insight: research replaces the performance history you don’t have yet.
Instead of testing blindly and hoping something works, you use systematic research to identify the highest-probability concepts before spending a dollar on creative production.
“We need to be looking at the industry landscape,” Isaac explains. “What are your competitors doing? What is the industry doing? What are the trends? If you’re running social ads, what if there’s a new social media trend getting great results? You need to know that.”
The Research Framework That Feeds Creative Development
Isaac’s research process has four distinct layers, each serving a specific purpose in the creative development pipeline:
Layer 1: Competitor Analysis
Isaac starts by mapping what competitors are actually running, not what they ran six months ago.
“What are your competitors doing right now?” This means systematic review of their current creative across platforms. What hooks are they using? What offers are they leading with? What formats are they testing?
But Isaac goes deeper than surface-level ad library browsing. He’s looking for patterns: Are multiple competitors using similar angles? That suggests validation. Is one competitor using a totally different approach? That might be a differentiation opportunity.
The key insight: your competitors have already spent money testing concepts. Their current creative represents what survived their testing process. You’re essentially getting free market research by analyzing what they’re still running.
Layer 2: Industry Landscape Mapping
Beyond direct competitors, Isaac looks at the broader industry context.
“What is the industry doing?” This means understanding macro trends, regulatory changes, seasonal factors, and market dynamics that might affect creative performance.
For a health supplement brand, this might mean understanding FDA guidance changes, ingredient trend cycles, or emerging research that competitors will eventually leverage. For a B2B software company, it might mean tracking which pain points are becoming more acute in the target market.
Layer 3: Platform Trend Analysis
This is where most media buyers miss opportunities.
“If you’re running social ads, what if there’s a new social media trend getting great results? You need to know that.”
Isaac tracks emerging content formats, trending sounds, viral hooks, and platform-specific trends that can be adapted for paid advertising.
A trending format on organic TikTok today might become a high-performing ad structure tomorrow. A viral hook in Instagram Reels might translate perfectly to a paid ad concept. But you only leverage these opportunities if you’re actively monitoring them.
The insight: platform trends move faster than industry trends. Catching them early gives you a performance edge before they become saturated.
Layer 4: Historical Performance Review
Finally, Isaac analyzes performance data – both from the client’s account and from similar clients in comparable niches.
“We also want to be looking at your own account, and at performance across your other clients if you’re operating in similar niches.”
What personas converted best? What offers drove the highest ROAS? What creative angles sustained performance longest? Which formats showed the most scale potential?
This layer is less available for newer accounts, which is precisely why the first three layers are so critical. When you don’t have your own performance history, you build knowledge from competitor behavior, industry context, and platform trends.
From Research to Briefs at Scale
This is where Isaac leverages AI and automation in a way most media buyers haven’t considered.
“The research allows you to build briefs that have been purposely designed for testing at scale,” Isaac explains. “When we’re working with enormous accounts, a lot of briefs need to get built and it makes no sense doing that with a huge team of people.”
Isaac’s system: “You need to be using AI, Zapier, Notion AI, ChatGPT agents to allow you to build briefs at scale.”
Here’s how this works in practice:
The research insights get structured into a knowledge base – competitor angles, industry trends, platform formats, performance patterns. This knowledge base becomes the foundation for AI-powered brief generation.
Isaac can then use AI agents to generate dozens of creative briefs that combine these research elements in different ways. A brief might combine a competitor angle with a platform trend and a proven persona. Another might test a new offer positioning against an emerging industry trend.
The efficiency gain is dramatic. What might take a team days to develop – generating 20-30 unique creative briefs with clear strategic rationale – can be done in hours with AI assistance when you have strong research foundations.
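In practice, the mechanical part of this can be as simple as enumerating combinations of research elements and handing each one to an AI step for expansion. Below is a minimal sketch, not Isaac's actual setup: the knowledge base is assumed to live in a plain Python dictionary, the personas, offers, and angles are placeholders, and the AI expansion step (ChatGPT, Notion AI, a Zapier hook) is deliberately left out.

```python
from itertools import product

# Assumed research knowledge base; in practice this would be pulled from
# Notion, a spreadsheet, or wherever the four research layers are stored.
knowledge_base = {
    "personas": ["busy professional", "new parent", "fitness enthusiast"],
    "offers": ["risk-free trial", "bundle discount", "transformation guarantee"],
    "angles": ["competitor comparison", "trending UGC hook", "social proof compilation"],
}

def build_brief(persona: str, offer: str, angle: str) -> dict:
    """Assemble a brief stub from one research combination. An AI step
    would expand this stub into a full creative brief; that call is
    intentionally omitted here."""
    return {
        "persona": persona,
        "offer": offer,
        "angle": angle,
        "rationale": f"Research-backed combo: {angle} angle, {offer} offer, aimed at {persona}.",
    }

# Every persona x offer x angle combination becomes a candidate brief.
briefs = [
    build_brief(p, o, a)
    for p, o, a in product(
        knowledge_base["personas"],
        knowledge_base["offers"],
        knowledge_base["angles"],
    )
]

print(f"{len(briefs)} candidate briefs generated")  # 3 x 3 x 3 = 27
```

Even this toy version shows why the approach scales: three options per research element already yields 27 distinct brief candidates, each with a traceable strategic rationale.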
The Testing Hierarchy: What to Test First
Once you have research-driven briefs, Isaac follows a clear testing hierarchy that determines what gets tested first:
Priority 1: Personas
“The most important thing you need to establish is who your top-performing personas are.”
Everything else depends on knowing who you’re targeting. A brilliant creative angle targeted at the wrong persona will fail. A mediocre creative angle targeted at the right persona might succeed.
Isaac’s research process specifically identifies persona signals: Which customer segments are competitors focusing on? Which pain points dominate industry discussions? What demographic patterns emerge from performance data?
Test personas first, establish your top 2-3, then build everything else around them.
Priority 2: Offer
“After personas, I think it’s offer – offer is the most important thing.”
Once you know who you’re targeting, test how you position your product. Is it a discount offer? A transformation promise? A risk-reversal guarantee? A limited-time opportunity?
Priority 3: Creative Angle
“And then after that, you’re looking at your creative, which is the angle.”
With persona and offer established, now you test the specific angle – the hook, the story structure, the emotional appeal, the logical argument.
The research phase should have generated a library of potential angles based on what’s working for competitors, what aligns with industry trends, and what platform formats are gaining traction.
Priority 4: Format
“And then the format it appears in.”
Only after validating persona, offer, and angle do you optimize format – video vs static, length, aspect ratio, visual style.
This is the variable most media buyers test first because it’s easiest to execute. Isaac tests it last because it’s least likely to create breakthrough performance improvements.
Why Small Variations Waste Money
One of Isaac’s most counterintuitive insights relates to test size. Most media buyers test small variations – blue button vs red button, slightly different headlines, minor copy tweaks.
Isaac thinks this approach is fundamentally flawed.
“What people get a lot wrong with creative testing is to go ‘oh, shall we test the blue background and the red background and see which one’s working?’ And it’s an insane way to do things.”
His reasoning is mathematical: “The amount of data you need to validate small changes in your target metric becomes proportionally bigger.”
Here’s the math Isaac uses:
“If I’ve got two ads and one’s 100% more efficient than the other one, then I don’t need a lot of data to validate that. But if one ad’s only getting a 1% improvement in CPA, then you could literally need to spend tens of thousands of pounds or dollars on that one test to actually validate it.”
Think about this: If Ad A gets you a $50 CPA and Ad B gets you a $49 CPA (2% improvement), how much would you need to spend to be confident that difference is real and not just random variance? Potentially $10,000-20,000 or more.
But if Ad A gets you a $50 CPA and Ad B gets you a $25 CPA (100% improvement), you know within the first $500-1,000 that you have a winner.
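The same reasoning can be made concrete with a standard two-proportion z-test power calculation. This is a rough sketch rather than anything Isaac prescribes: the 2% baseline conversion rate, 95% confidence, and 80% power are assumed inputs, and the exact counts move with them, but the near inverse-square relationship between effect size and required data is the point.

```python
import math

def clicks_needed_per_ad(base_cr: float, relative_lift: float,
                         z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Two-proportion z-test approximation: clicks each ad needs for a
    given relative lift in conversion rate to be detectable at ~95%
    confidence with ~80% power."""
    p1 = base_cr
    p2 = base_cr * (1 + relative_lift)
    variance_term = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance_term / (p1 - p2) ** 2)

BASELINE_CR = 0.02  # assumed 2% click-to-conversion rate, purely illustrative

print(clicks_needed_per_ad(BASELINE_CR, 0.02))  # ~2% lift  -> roughly 1.9M clicks per ad
print(clicks_needed_per_ad(BASELINE_CR, 1.00))  # 100% lift -> roughly 1,100 clicks per ad
```

Under these assumptions the small-lift test needs on the order of a thousand times more data than the large-lift test, which is exactly why Isaac pushes for tests between fundamentally different concepts.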
“The way you get those vastly different levels of performance is by doing large tests. Do tests where you’ve got one idea and it’s totally different from the other idea.”
Isaac’s recommendation: test completely different concepts, not variations of the same concept.
Don’t test “beach scene vs mountain scene.” Test “transformation story vs feature explanation vs social proof compilation.”
Don’t test “blue CTA vs red CTA.” Test “discount offer vs guarantee offer vs scarcity offer.”
“This is so important, especially for newer accounts.”
When you don’t have performance history, you need to find what works quickly. Large tests get you there faster and cheaper than small variations ever could.
This philosophy connects directly back to the research-first approach. When your research generates fundamentally different concepts to test – different personas, different offers, different angles – you’re naturally doing large tests. You’re not testing minor variations because you haven’t settled on what fundamental approach works yet.
Building Creative Diversification
Isaac’s research-first method serves another critical purpose: building creative diversification that protects account performance.
“You want to have a portfolio of personas, because it’s all well and good getting really deep into one persona, but if that persona stops working for whatever reason and you have two or three others that are maybe not performing as well as your top persona but are still performing well, it gives you a lot more safety, diversification, longevity.”
The research phase identifies multiple viable personas from the start. You’re not putting all your eggs in one basket, hoping a single persona continues performing indefinitely.
The same applies to offers, angles, and formats. Research should generate a portfolio of options that work to varying degrees. Your top performer might be a transformation-focused video ad targeting persona A. But you should have validated concepts for personas B and C, alternative offers that work acceptably, and backup angles that maintain profitability.
“That never really stops. You are always testing that creative angle because creative fatigues.”
The research-first approach creates a constant pipeline of new concepts to test. You’re never starting from scratch when creative performance declines because you have a backlog of research-validated concepts waiting to be produced and tested.
Determining Test Volume
Isaac’s research process can generate dozens of potential creative concepts. But how many should you actually test simultaneously?
His framework is mathematical: “You want to get enough conversions in a week to be able to optimize. I think the bare minimum is around the 10 conversions a week mark.”
Here’s the formula:
Weekly Budget ÷ Target CPA = Weekly Conversions Available
Weekly Conversions Available ÷ 10 = Maximum Number of Tests
Isaac gives an example: “If your weekly budget is $10,000 and your target CPA is $20, then you’ve got enough budget to run 50 tests because you want to give each ad enough budget so it can get seven to ten of those conversions in a week.”
The math: $10,000 ÷ $20 = 500 conversions per week. 500 ÷ 10 = 50 potential tests.
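Isaac's sizing rule reduces to two lines of arithmetic. Here is a minimal sketch; the second example's budget and CPA are made-up numbers for illustration.

```python
def max_concurrent_tests(weekly_budget: float, target_cpa: float,
                         min_conversions_per_test: int = 10) -> int:
    """Cap the number of simultaneous tests so each one can still
    collect ~10 conversions per week at the target CPA."""
    weekly_conversions = weekly_budget / target_cpa
    return int(weekly_conversions // min_conversions_per_test)

print(max_concurrent_tests(10_000, 20))  # 500 conversions/week -> 50 tests
print(max_concurrent_tests(3_500, 70))   # 50 conversions/week  -> 5 tests
```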
But context matters enormously.
“If you have a really strong, high-performing account, 20% of budget, 10% of budget for testing, especially if it’s big budgets.”
When you have proven winners performing well, you don’t need to allocate 100% of budget to testing. You can scale your winners while testing new concepts with 10-20% of budget.
“If you’re in a crisis situation or the account is brand new, then you want to spend all of your budget on creative testing.”
New accounts or struggling accounts need to find what works, which means maximum budget allocation to testing until you identify reliable winners.
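As a rough sketch of that allocation logic (the account states and exact percentages here are assumptions inferred from the quotes above, not a prescription from Isaac):

```python
def testing_budget_share(account_state: str) -> float:
    """Assumed heuristic: proven accounts protect their winners and
    reserve a slice for testing; new or struggling accounts put
    everything into finding winners."""
    shares = {
        "strong": 0.10,         # strong performers: ~10-20% of budget on testing
        "steady": 0.20,
        "new_or_crisis": 1.00,  # brand new or crisis accounts: all budget on testing
    }
    return shares[account_state]

weekly_budget = 10_000
print(weekly_budget * testing_budget_share("strong"))         # ~1,000 to testing
print(weekly_budget * testing_budget_share("new_or_crisis"))  # ~10,000 to testing
```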
In Summary
“The media buyers who master research-first creative development consistently outperform those relying on creative intuition alone. Research doesn’t replace creativity; it directs creative energy toward the highest-probability concepts before you spend a dollar on production or testing.”
The research-first approach is especially valuable in the crisis and new-account scenarios described above.
When you need to find winners quickly and can’t afford to waste budget on low-probability tests, systematic research dramatically improves your hit rate.
You can get Isaac’s 9 Rapid-Fire Questions With A $4M+ Media Buyer.
For more, watch the full Q&A interview with Isaac here.