Signal-to-Bid Methodology

Five signal layers. Two decisions. Every impression.

Every eligible impression is evaluated across five proprietary signal layers. The composite score determines whether we bid at all — and exactly how much we bid. This is what agentic optimization actually means at the impression level.

01 →
Creative Intelligence
60+ scored dimensions per asset. Format, persuasion, attention, CTA, contextual signals.
02 →
Competitive Landscape
Next-day SOV by publisher × DMA. Whitespace and battleground identification.
03 →
Audience / Identity
Cuebiq affinity + P_Next scoring. Attain purchase propensity. Device graph resolution.
04 →
Conversion Feedback
Daily device-level conversion feeds. Suppression and real-time bid weight adjustment.
05 →
Context Harmony
Creative signal profile × content environment. Harmony Score (0–100) per impression.
What Makes This Different
Traditional CTV Stack
Broad strokes at campaign launch
  • Audience segments set at kickoff, static through flight
  • Creative intelligence not connected to media buying
  • Competitive data quarterly, weeks of lag
  • Context matching limited to FAST or not present
  • Conversion feedback post-campaign only
  • Uniform CPM across qualifying inventory

The data behind these signals was technically available before. What wasn't practical until now was cleansing it, organizing it, and optimizing off of it at the impression level, in real time. That is the agentic layer.

Layer 01 · PurePlay Proprietary
Creative Intelligence
60+ scored dimensions per creative asset · Benchmarked against competitive set via index scoring
Every creative asset is analyzed across eight dimension categories before the campaign launches. Each asset receives a signal profile that travels with every impression it generates — and gets matched against the content environment at bid time.
Format Classification (9 types) · Primary Audience Inference (12 segments) · Key Benefits (9 dimensions) · Persuasion Signals (8 dimensions) · Attention Capture · Attention Maintain · CTA Classification · Audio Brand Mentions · Visual Brand Mentions · Scene-Level Contextual Extraction · Full Transcript Analysis
Each dimension is scored 1–10 and indexed against the competitive set: client score ÷ competitor average × 100. A score of 120 = 20% above competitive average on that dimension. This makes gaps visible — urgency, price framing, audience representation, CTA type — so media strategy can reinforce strengths and fill holes.
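The index calculation above can be sketched in a few lines. This is an illustrative implementation of the stated formula (client score ÷ competitor average × 100); the dimension names and sample scores are hypothetical.

```python
# Illustrative sketch of the index scoring described above: each dimension is
# scored 1-10, then indexed against the competitive average.
def index_scores(client_scores: dict, competitor_scores: list[dict]) -> dict:
    """Return a per-dimension index; 120 means 20% above the competitive average."""
    indexed = {}
    for dim, score in client_scores.items():
        competitor_avg = sum(c[dim] for c in competitor_scores) / len(competitor_scores)
        indexed[dim] = round(score / competitor_avg * 100)
    return indexed

# Hypothetical scores for two dimensions against two competitors
client = {"urgency": 6, "price_framing": 9}
competitors = [{"urgency": 8, "price_framing": 6}, {"urgency": 7, "price_framing": 6}]
print(index_scores(client, competitors))  # {'urgency': 80, 'price_framing': 150}
```

An index of 80 on urgency makes the gap visible immediately: the creative under-indexes the competitive set on that dimension by 20%.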

Dimension Detail

Format types: Narrative, Testimonial, Showcase, Demo, Lifestyle Montage, Presenter/Host, Graphics Only, Promotional Only, Manifesto/Brand Anthem.

Persuasion signals: Ease of Action, Proof, Authority, Social Norms, Guarantees/No Risk, Discounts, Urgency/Scarcity, Missed Opportunity.

Attention signals: Opening Hook, Pattern Interrupt, Direct Address, Pacing Variation, Emotional Peaks, Visual Contrast, Sound Design, Curiosity Gap, Stakes/Conflict, Surprise/Novelty.
How It Feeds the Bid

Each creative gets a signal profile. At bid time, the optimizer matches that profile against the content environment to produce a Harmony Score (Layer 5). Creatives with stronger contextual alignment earn higher bid multipliers in matching environments — and lower ones where the match is weak.

Layer 02 · PurePlay Proprietary
Competitive Landscape
ACR + set-top box data · Next-day delivery · 68+ publishers tracked
For every market in the campaign footprint, we track each competitor's share of voice on every publisher — delivered next-day, not quarterly. Competitive density determines where to bid aggressively and where to conserve spend.
SOV by Publisher × DMA · Whitespace identification · Battleground publishers (±10% SOV) · Distribution of Volume (DOV) · Publisher tier classification · New creative launch detection · Spend shift alerts · 68+ publishers tracked
Whitespace: Publishers where all competitors are below a meaningful threshold — these get aggressive bids.
Battleground: Publishers where the client is within ±10% SOV of the nearest competitor — these get prioritized or defended.
Saturated: FAST environments where competitors dominate — these get conservative bids or a pass.
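A minimal sketch of this classification logic, assuming SOV is expressed in percentage points. The 5-point whitespace threshold is an assumed value for illustration, not PurePlay's actual cutoff.

```python
# Illustrative publisher classification from competitive SOV data.
# WHITESPACE_THRESHOLD is an assumed value, not the actual production cutoff.
WHITESPACE_THRESHOLD = 5.0  # max competitor SOV (%) to count as whitespace

def classify_publisher(client_sov: float, competitor_sovs: list[float]) -> str:
    top_competitor = max(competitor_sovs)
    if top_competitor < WHITESPACE_THRESHOLD:
        return "whitespace"        # aggressive bids
    if abs(client_sov - top_competitor) <= 10.0:
        return "battleground"      # prioritize or defend
    if top_competitor > client_sov:
        return "saturated"         # conservative bid or pass
    return "leading"

print(classify_publisher(2.0, [1.0, 3.0]))    # whitespace
print(classify_publisher(18.0, [22.0, 9.0]))  # battleground (within ±10% SOV)
print(classify_publisher(4.0, [40.0, 12.0]))  # saturated
```

Because the underlying SOV data refreshes daily, a publisher can move from whitespace to battleground overnight when a competitor launches.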

Why This Matters for Publisher-Direct Buys

Run-of-network deals are strong for scale. But even within a single publisher buy, competitive density varies by daypart, content type, and slot position. The competitive layer adds a signal: are the impressions we're winning inside this deal actually the premium ones, or are competitors owning those? Deal curation through OpenX lets us tighten buys at the impression level without disrupting existing publisher relationships.
How It Feeds the Bid

Competitive signals determine where to bid aggressively (whitespace, low-competition premium publishers) and where to conserve (saturated environments where competitors dominate). Updated daily, so a competitor's new campaign launch changes the bid landscape the next morning.

Layer 03 · Data Partner Dependent
Audience / Identity Signals
Cuebiq (live integration) · Attain (active build) · Device graph via Experian/LiveRamp · 60–70% match rate
Device-level propensity scoring from verified third-party data partners — used upstream for pre-bid scoring, not just downstream for post-campaign measurement.
Cuebiq — Live Integration

Two scores per device, refreshed weekly:

Affinity Index (0–1): Brand loyalty based on frequency and recency of physical visitation. Device at 0.95 = strong loyalty, prime re-engagement candidate.

P_Next (0–1): Predicted probability of visiting a brand location in the next 30 days. Device at 0.35 = 35% predicted visit probability.

Composite: Affinity × P_Next weighted by campaign objective. A device at 0.95 Affinity + 0.35 P_Next earns a meaningfully higher bid multiplier than 0.70 + 0.08. Validated across 145K+ unique device IDs, 1,457 brands, 74 verticals — composite scoring successfully drove differentiated bid multipliers in live traffic.
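The composite scoring above might look like the following sketch. The objective weights and the multiplier tiers are illustrative assumptions; only the Affinity and P_Next inputs and the weighted-blend structure come from the description.

```python
# Hypothetical sketch of the Affinity x P_Next composite. The weights and the
# multiplier mapping are assumed values, not PurePlay's actual parameters.
def composite_score(affinity: float, p_next: float,
                    w_affinity: float = 0.4, w_pnext: float = 0.6) -> float:
    """Weighted blend of the two 0-1 Cuebiq scores; weights vary by objective."""
    return w_affinity * affinity + w_pnext * p_next

def bid_multiplier(score: float) -> float:
    """Map a composite score to a bid multiplier (assumed piecewise tiers)."""
    if score >= 0.5:
        return 1.5
    if score >= 0.3:
        return 1.2
    return 0.8

loyal_visitor = composite_score(0.95, 0.35)  # the high-propensity device above
weak_signal = composite_score(0.70, 0.08)    # the lower-propensity comparison
print(bid_multiplier(loyal_visitor), bid_multiplier(weak_signal))  # 1.5 1.2
```

Under these assumed tiers, the 0.95/0.35 device earns the top multiplier while the 0.70/0.08 device lands a tier lower, mirroring the differentiated bidding described above.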

Attain — Active Build · First-to-Market Opportunity

Attain is already validated as a measurement partner for restaurant and QSR campaigns (ShopRite: $10.94 ROAS, #1 programmatic partner). PurePlay is building the same closed-loop architecture with Attain that's been validated with Cuebiq: pre-bid propensity scoring from deterministic purchase data, with conversion feeds flowing back to continuously adjust bid weights. The first brand to activate this gets first-mover advantage on Attain-powered agentic CTV optimization at scale.

How It Feeds the Bid

High-propensity devices get higher bid multipliers. Devices that have already converted get suppressed. The identity layer is what separates reaching the right person from reaching everyone who merely meets basic targeting criteria.

Layer 04 · Real-Time Closed Loop
Conversion Feedback Loop
Daily device-level conversion feeds · S3 (Cuebiq) or API (Attain) · 83 learning cycles per 90-day campaign
Conversion data doesn't wait until end-of-campaign to influence buying. It flows in daily and gets applied immediately to bid weights and suppression lists — the campaign is smarter on day 30 than it was on day 1.
Daily suppression: device visited → deprioritize · Mid-campaign bid weight adjustment · Signal correlation tracking · 83 learning cycles / 90-day campaign vs. ~11 for batch-at-window-close
7x learning speed advantage over traditional batch measurement: 83 rolling daily cycles versus ~11 batch cycles over the same 90-day window. Signal combinations that correlate with actual conversions get weighted more heavily in subsequent scoring — the system self-improves every day the campaign runs.

Suppression Logic

If a device already visited a location or completed a target action, there is no reason to keep paying to reach them. Suppression is applied automatically from daily feeds — not at the end of the measurement window. Budget saved here gets reallocated to higher-propensity devices still in the conversion funnel.
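The suppression mechanism is simple in principle, which is the point: a converted device is removed from bidding the day its conversion lands, not weeks later. A minimal sketch, with hypothetical device IDs:

```python
# Minimal sketch of daily conversion suppression: devices in the day's
# conversion feed are dropped from bidding immediately.
class SuppressionList:
    def __init__(self):
        self.converted: set[str] = set()

    def ingest_daily_feed(self, device_ids: list[str]) -> None:
        """Apply a day's conversion feed (e.g. from an S3 drop or API pull)."""
        self.converted.update(device_ids)

    def should_bid(self, device_id: str) -> bool:
        return device_id not in self.converted

suppress = SuppressionList()
suppress.ingest_daily_feed(["device_a", "device_b"])
print(suppress.should_bid("device_a"))  # False: already converted
print(suppress.should_bid("device_c"))  # True: still in the funnel
```

In practice the budget freed by each `False` is what gets reallocated to higher-propensity devices still in the funnel.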
How It Feeds the Bid

Conversion data closes the loop between what the algorithm predicts and what actually happened. Signals that correlate with real conversions gain weight. Signals that don't get down-weighted. Every campaign cycle makes the next one more precise.

Layer 05 · Creative × Content Match
Context Harmony Score
Works across all CTV inventory types · Scene-level on FAST · Publisher + category level across premium
Every creative has a signal profile from Layer 1. Every impression opportunity has a content profile. The Harmony Score (0–100) measures how well this specific creative resonates with this specific content environment at the moment of the bid.
Harmony Score: 0–100 per impression · Publisher identity + content category · Program metadata · Scene-level targeting on FAST · IAB category alignment · Genre matching
A creative scored high on energy, lifestyle relevance, and situational excitement earns a high Harmony Score in sports, action, or high-energy entertainment content — and a lower score in home renovation or news. This is impression-level context alignment, not segment-level audience targeting. It works on top of audience signals, not instead of them.
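One way to picture the score is as a similarity between two profiles. The sketch below uses cosine similarity over hypothetical dimension names; the actual Harmony model is not specified here, so treat both the profile keys and the scoring function as illustrative assumptions.

```python
# Illustrative Harmony Score sketch: cosine-style overlap between a creative's
# signal profile and a content environment profile, scaled to 0-100.
# The dimension names and the similarity measure are assumptions.
import math

def harmony_score(creative_profile: dict, content_profile: dict) -> int:
    dims = set(creative_profile) | set(content_profile)
    a = [creative_profile.get(d, 0.0) for d in dims]
    b = [content_profile.get(d, 0.0) for d in dims]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return round(100 * dot / norm) if norm else 0

energetic_ad = {"energy": 0.9, "lifestyle": 0.8, "excitement": 0.7}
live_sports = {"energy": 0.95, "excitement": 0.9}
home_reno = {"calm": 0.9, "practicality": 0.8}
print(harmony_score(energetic_ad, live_sports))  # high: strong shared signals
print(harmony_score(energetic_ad, home_reno))    # 0: no shared dimensions
```

The same creative scores high against the sports profile and zero against home renovation, which is the behavior the paragraph above describes.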

Connection to Creative Testing

If a resonance test is in-flight, PurePlay's creative scoring provides the explanatory layer: which signals in the creative are driving resonance, and in which environments. Format, persuasion mechanics, audience representation, CTA type — 60+ dimensions scored against the competitive set. When resonance results come in, the signal analysis shows exactly what to replicate, amplify, or fix.
How It Feeds the Bid

High Harmony Score impressions earn higher bid multipliers — the optimizer bids more aggressively to win slots where creative-context alignment is strongest. Low Harmony Score impressions get passed or bid conservatively, regardless of audience match. It's not enough to reach the right person. The creative has to fit the moment.

The Mechanism
From Five Signals to a Bid Decision
In any given minute, say there are 100,000 available auctions that meet a campaign's targeting criteria. Budget pacing requires winning ~1,000 impressions. At a typical 5–10% win rate, that means bidding on 10,000–20,000 auctions and passing on the other 80,000+.
What Most Platforms Do
  • Select which auctions to bid on more or less arbitrarily at scale
  • Apply CPM target uniformly across qualifying inventory
  • Frequency caps and brand safety rules handle the rest
  • Audience segments set at campaign launch, static through flight
  • Optimization happens at end-of-flight or weekly review
What PurePlay Does
  • Rank all 100,000 impressions by composite signal score
  • Select the top-ranked impressions to bid on — the highest signal concentration
  • Set a differentiated bid price per impression based on composite score magnitude
  • Recalibrate the ranked list every minute based on pacing and incoming signals
  • Apply daily conversion data to adjust which signals get weighted most heavily
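The ranked-bidding steps above can be sketched as follows. The score threshold and the CPM pricing curve are assumed values for illustration; only the structure (rank, threshold, differentiated price) comes from the list above.

```python
# Sketch of the per-auction decisions: rank eligible auctions by composite
# score, bid only above a threshold, price each bid by score magnitude.
# BID_THRESHOLD and the pricing curve are illustrative assumptions.
BID_THRESHOLD = 0.6
BASE_CPM = 20.0

def decide(auctions: list[dict]) -> list[dict]:
    """auctions: [{'id': ..., 'score': 0-1}]; returns bids for qualifying auctions."""
    ranked = sorted(auctions, key=lambda a: a["score"], reverse=True)
    bids = []
    for auction in ranked:
        if auction["score"] < BID_THRESHOLD:
            continue  # pass, even if targeting criteria are technically met
        bids.append({
            "id": auction["id"],
            "cpm": round(BASE_CPM * (0.5 + auction["score"]), 2),  # price by score
        })
    return bids

auctions = [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}, {"id": 3, "score": 0.7}]
print(decide(auctions))  # bids on 1 and 3 only; id 1 priced highest
```

Auction 2 meets targeting but falls below the threshold and gets no bid; auctions 1 and 3 get different prices, not a uniform CPM.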

Two Decisions Per Auction

01
Whether to bid at all
Only impressions that cross the composite score threshold get a bid. Many impressions that technically meet targeting criteria — right audience, right geo, right publisher — still get passed because the combined signal score doesn't justify the spend. This is where budget efficiency is protected.

02
How much to bid
Every impression that clears the threshold gets a differentiated bid price based on composite score magnitude. Higher signal concentration earns a more aggressive bid; a marginal score earns a conservative one.

Agentic vs. Traditional: Capability Comparison

Capability | Traditional CTV Stack | PurePlay Agentic
Audience targeting | Static segments, set at campaign launch | Dynamic composite scoring, recalibrated per minute
Creative intelligence | Not connected to media buying | 60+ dimensions scored and mapped to bid decisions
Competitive landscape | Quarterly reports, weeks of lag | Next-day, DMA × publisher granularity
Context matching | FAST-only or none | All CTV inventory, creative-to-content Harmony Score
Conversion feedback | Post-campaign measurement | Daily suppression + real-time bid adjustment
Bid optimization | CPM goals, frequency caps | Impression-level propensity scoring, differentiated bid per device
Learning cycles (90-day campaign) | ~11 batch cycles | ~83 rolling daily cycles — 7x faster
Engagement Options
Self-Service or Managed — Both Run the Full Signal Stack
The engagement model determines where bid execution happens. Both options run the complete five-layer signal architecture. The difference is whether PurePlay's Augmentor controls the DSP-side bid decision, or whether the client's preferred DSP executes against PurePlay-curated, signal-enriched supply.
Managed Service via Augmentor
PurePlay controls bidding end-to-end.
  • PurePlay's custom bidder (Augmentor) deploys inside the DSP environment
  • Both supply-side (OpenX enrichment) and demand-side (Augmentor bid function) are optimized simultaneously
  • Full composite score → differentiated bid price per device, per impression
  • Phase-aware behavior: Discovery → Calibration → Optimization → End-of-Flight
  • Complete pacing control, margin zone management, converter suppression
  • PurePlay manages inventory sourcing alongside the buy

Capability Comparison

Capability | Self-Service | Managed
All five signal layers | ✓ | ✓
Publisher deal curation (cherry-picked) | ✓ Full control | ✓ Full control
Creative intelligence scoring | ✓ | ✓
Competitive SOV data (next-day) | ✓ | ✓
Cuebiq / Attain propensity scoring | ✓ | ✓
Conversion suppression (daily) | ✓ Via deal floor overrides | ✓ Direct bid suppression
Context Harmony scoring | ✓ | ✓
Per-impression bid price differentiation | Supply-side signals only | ✓ Full (Augmentor)
DSP used | Client's preferred DSP | PurePlay Augmentor
Your Existing Stack
🔗
Attain
Your measurement partner becomes a pre-bid signal
Attain is already trusted for attribution on this business. PurePlay's architecture uses that same Attain data upstream — to score devices before bidding, not just measure results after. The closed-loop architecture (Attain propensity → pre-bid scoring → conversion feedback → bid weight adjustment) is in active build. The first brand to activate this gets first-mover advantage on Attain-powered agentic CTV at scale.
🎬
Innovid DCO
PurePlay identifies the gaps. Innovid executes the mix.
PurePlay's creative intelligence layer identifies which dimensions are winning or losing against competitors — urgency, price framing, audience representation, CTA type. That analysis informs which creative variations Innovid deploys where. Complementary workflow, not a replacement.
📊
Resonance Testing
PurePlay explains why resonance happens
A resonance test shows which audiences respond to which creative assets. PurePlay's creative scoring answers the follow-on question: which signals in the creative are driving that resonance? When results come in, the signal analysis shows exactly what to replicate and what to fix — 60+ dimensions against the competitive set.
📡
Publisher-Direct Deals
Intelligence on top of your existing supply strategy
Run-of-network buys with major publishers are strong for scale. PurePlay's competitive data answers: within that RON buy, is delivery actually premium or defaulting to FAST inventory? Competitive SOV at the publisher + DMA level shows whether competitors are winning the premium slots inside those deals — and deal curation through OpenX tightens the buys at the impression level without disrupting existing relationships.
Validated Results
This Isn't Theoretical
Three proof points across two data partners — foot traffic and ROAS — with third-party measurement in both cases.
+9 pts
The lift from adding the agentic layer
Multimodal creative intelligence alone drove +9% above Cuebiq foot traffic benchmark. Adding the full agentic optimization layer — composite scoring, real-time recalibration, conversion feedback — pushed that to +17% above benchmark. The +9 points is the value of the optimization layer on top of the signal intelligence. That is what the five-layer architecture adds.
+17%
above Cuebiq foot traffic benchmark · full agentic optimization vs. industry standard
Sunglass Hut
$10.94
ROAS · #1 programmatic partner across all channels · Attain-measured, third-party validated
ShopRite
145K+
unique device IDs scored in live traffic · Affinity × P_Next composite successfully drove differentiated bid multipliers
Cuebiq Integration
Measurement Partners
Partner | Role | Status | Validated Outcome
Cuebiq | Foot traffic attribution + pre-bid scoring | Live | +17% foot traffic lift · differentiated bid multipliers in live traffic
Attain | Purchase attribution + pre-bid scoring (building) | Active Build | $10.94 ROAS (ShopRite) as measurement partner · pre-bid architecture in development
Experian / LiveRamp | Device graph identity resolution | Live | 60–70% MAID-to-CTV match rate on Experian graph