May 13, 2026 · 10 min read

Six Million Interactions Later: A Field Report

Three years. 2,119 widgets. 6,413,413 answers. 159 publishers. The patterns that show up in the data — and what they mean if you ship interactive content for a living.

By iThinkToday editorial

Six million answers is the kind of number where you find out which of your beliefs were actually right.

We have been running iThinkToday in production for a little over three years. As of this morning, the engine has served 2,119 interactive widgets — quizzes, polls, personality tests, photo battles — across 159 publishers, and the readers of those publishers have produced 6,413,413 answers, votes, and completed sessions.

This is what those numbers actually say.

  • 6,413,413 total answers, votes, and completions across all formats since launch
  • 2,119 interactive widgets shipped (quizzes, polls, personality tests, photo battles)
  • 159 publishers in production (news, sports, lifestyle, education, B2B)

The top-line is one number, which is the wrong number to plan around. The interesting story is in the breakdown.

The scoreboard, by format

What 2,119 widgets actually look like

Counts since launch · responses include answers, votes, and completed sessions

Workhorse
Quizzes
  • Widgets shipped: 1,356
  • Total responses: 5,833,508
  • Avg per widget: ~4,300

The bread-and-butter format. Craft is saturated.

Velocity
Polls
  • Widgets shipped: 501
  • Total responses: 414,215
  • Avg per widget: ~827

Low avg per widget. Highest authoring velocity.

Specialist
Photo battles
  • Widgets shipped: 212
  • Total responses: 103,652
  • Avg per widget: ~489

Deepest sessions. Hardest to author.

Under-shipped
Personality tests
  • Widgets shipped: 50
  • Total responses: 62,038
  • Avg per widget: ~1,241

Highest avg per widget of any format except quizzes, by a 1.5× margin over polls.

The most-shipped format isn't the highest-leverage format per unit of editorial effort, and the format with the best leverage is the one publishers ship least. That gap is the whole story.
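The per-widget averages in the cards are straight division over the shipped counts. A quick sanity check, with the counts copied from the scoreboard (the quiz card rounds ~4,302 to ~4,300):

```python
# Counts copied from the scoreboard above: (widgets shipped, total responses).
formats = {
    "quizzes":           (1_356, 5_833_508),
    "polls":             (501, 414_215),
    "photo battles":     (212, 103_652),
    "personality tests": (50, 62_038),
}

for name, (widgets, responses) in formats.items():
    # Average responses per widget, rounded to the nearest whole response.
    print(f"{name}: ~{round(responses / widgets):,} responses per widget")
```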

A few things in that table that we did not expect three years ago.

Quizzes carry the volume, but they are not the highest-leverage format per unit of editorial effort. They are simply the format publishers reach for first, because the craft is the most familiar. 1,356 quizzes is what publisher muscle-memory looks like.

Polls are velocity-shaped, not depth-shaped. Each poll captures fewer responses than each quiz, but polls are roughly ten times cheaper to author. The right way to read the 501 number is not "polls underperform" — it's "polls are how publishers handle the long tail of opinion content that would never have justified a quiz."

Photo battles are the deepest format we measure. The widget count is small (212), but each surviving battle does work no other format does: median session depth on a photo battle is roughly 6× a quiz, and 48-hour return rates clear 30%. They are also the hardest format to author well — every one of those 212 was a deliberate editorial choice, not a default reach.

Personality tests are the headline. Fifty widgets. Sixty-two thousand responses. ~1,241 responses per widget: the highest average of any format except quizzes, and 1.5× the poll average. And yet they are the rarest thing publishers ship. Of the 159 publishers in our production data, only 31 have ever shipped a personality test.

What the under-shipping says

Set quizzes, the default, aside. A personality test, on average, produces 1,241 responses. A poll produces 827. A photo battle produces 489. Among the formats publishers actually choose between, the most-shipped is not the one that captures the most attention per widget. The format that captures the most attention per widget is the one publishers think hardest about whether to ship at all.

The single most-mispriced format

The reason is mechanical. A quiz feels like editorial — there's a thing to know, an answer to write, a fact to check. A poll feels like a question, which feels editorial-adjacent. A photo battle feels like a tournament, which feels editorial-adjacent.

A personality test feels like a quiz with the answer thrown away. Editors trained on knowledge-quiz craft find the format unsettling: there is no right answer, the result is a label, and the production looks (to a journalist) closer to astrology than to reporting.

The numbers do not care. Identity-shaped content captures the audience that information-shaped content used to. That is the single biggest editorial-product decision the publishers in our top decile have made over the past two years — and the data above is what that decision looks like at scale. The longer-form companion: Personality Tests as Audience Engines.

Three patterns we keep seeing

Across the 2,119-widget corpus, three patterns show up in every successful publisher's data — and their inverses show up in every failing one.

1. The single biggest lift is the embed

We have been collecting embed-strategy data since 2024. The numbers have not moved. Moving off an eager iframe and onto a native web component remains the single largest engagement gain available in the category — usually larger than any change a publisher can make to the content of the widget itself.

Start rate by embed strategy

% of readers who began the interactive · n = 14,200 sessions across the production set

  • Native web component (inline): 86% ★ sweet spot. Element renders as part of the article.
  • Hydrated React island (inline): 71%. Hydration cost, but element-shaped.
  • Lazy iframe (inline): 38%. Boots after scroll-in; reader lags.
  • Eager iframe (inline): 34%. Blocks LCP, tanks INP.
  • Iframe in a modal: 24%. Modal trigger = a second decision.
The content is the same. The reader is the same. The vendor's rendering decision is the only variable.

A publisher running the same widget catalog twice — once on iframe, once on a native web component — will see start-rate differences of more than 2× before any editorial change. That fact deserves more discussion than it gets. The longer-form companion: Why Native Web Components Beat Iframes.
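For concreteness, the gap between the two ends of that chart comes down to a few lines in the article template. A sketch, with an illustrative host URL and attribute name; only the `<my-question>` element name comes from this post:

```html
<!-- Eager iframe embed: a separate document the browser must boot
     before the widget renders; blocks LCP and adds a scroll context. -->
<iframe src="https://widgets.example.com/quiz/123"></iframe>

<!-- Native web component embed: one module script registers the element,
     which then renders inline as part of the article's own DOM. -->
<script type="module" src="https://widgets.example.com/embed.js"></script>
<my-question widget-id="123"></my-question>
```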

2. The result page is where 80% of share rate is made or lost

The single biggest controllable lever inside the widget itself is the result page. Not the title. Not the question count. Not the visual design of the question screens. The result page.

Three rules, repeated almost verbatim in every playbook we've shipped:

  • One line of interpretation, not five. Long result pages compress the share moment.
  • Specific praise on top results. "You're in the rarest 7%" outshares "Well done!" by ~3×.
  • One curated next step, not a list. Three "related articles" generate ~40% of the recirculation a single tuned link does.

The playbooks — quizzes, polls, photo battles, personality tests — are 80% the same advice on the result page rendered in four different shapes. We do not apologize for that. The format does not change the rule.

3. Cadence beats hero

The publishers in our top decile of total responses have one thing in common, and it is not that they shipped a hit. It is that they shipped monthly.

12-month total responses by publisher cadence

Median publisher in each cadence band · n = 159 publishers

  • Weekly: 284K ★. Highest. Compounds via newsletter cadence.
  • Monthly: 196K. The pragmatic sweet spot.
  • Quarterly: 84K. One hit, three quiet months.
  • One-off / hero-mode: 32K. Always underperforms a routine.

The hero-mode publisher — the one who decided to ship one big quarterly interactive and then nothing — underperforms the monthly cadence publisher by roughly 3× over twelve months, even when the hero quiz outperforms any single monthly one. The audience does not optimize for our peaks. It optimizes for a routine to expect.

This is the publisher-side change that has produced the most net response volume in our production data, year over year. It is also the cheapest change to make: pick a slot in the editorial calendar, hold it, ship into it.

What we got wrong

A field report that does not include "what we got wrong" is a brochure. Here are three bets that paid off, and three we would un-make.

Bets that paid off

  • Web components from day one. The 2023 decision to ship as <my-question> instead of an iframe widget is the single technical bet that aged best. Three years later it is also the brand argument.
  • AI for first drafts, never for final copy. Authoring assistance speeds the editorial team up; replacing the editorial team does not. Quality stayed high because we refused the temptation.
  • One pricing tier publishers can actually afford. Flat per-month, not per-response. Removed the perverse incentive to hide widgets behind low-traffic placements.

Bets we'd un-make

  • The "embed code" onboarding flow we shipped in 2024. Designed for engineers, hostile to editors. We rewrote it twice; we should have shipped the editor-first version first.
  • Underweighting photo battles in our own examples. For the first two years we showed quizzes everywhere and battles almost nowhere — which is exactly the editorial trap we now diagnose in our customers. Hypocrisy noted.
  • Letting the dashboard get ahead of the editorial workflow. Better analytics did not produce better content. Better authoring tools did.

The second one in particular: the bias we keep pointing out in publisher data — that the format with the highest per-widget leverage is the most under-shipped — was also true of us. We fit our own diagnosis.

Where the line is going

A few cautious bets for the next twelve months, made publicly so we can be wrong in writing.

  • Identity-shaped formats keep gaining share. Personality-test response volume in our production set grew 2.4× year over year, off a small base. Knowledge quizzes grew 1.2×. The gap is not closing.
  • The iframe quiz vendor is a vanishing category. Of the 159 publishers in our production data, 32 migrated from an iframe-based competitor in the past 18 months. None migrated the other way. That is not a sample, but it is a signal.
  • AI-agent authoring becomes a real distribution channel. Our MCP server is one of the fastest-growing surfaces in the product — not because every reader wants their content AI-generated, but because the publisher's internal AI workflow now reaches into the widget toolkit. The longer that pattern runs, the more it shapes the category.

The most boring thing in the data, and the thing we are most confident about, is the third pattern from earlier: cadence beats hero. Whatever the format mix turns out to be, the publishers who will compound the audience over the next twelve months are the ones holding a slot in their calendar for an interactive. The publishers who will not are the ones still treating these formats as one-off campaigns.

6.4 million answers do not tell you what to ship next. They tell you what publishers, given a free plan and three years, actually shipped — and what their audiences answered.

The honest sales pitch

iThinkToday powers every widget linked in this post — and every widget counted in the numbers above — on a single native custom element, with the same engine on the free plan as on the enterprise tier. The free plan is enough to test the patterns from this report on one of your own articles and measure them against your current vendor's numbers. If you want to skip the meeting, the for-publishers page has the short version.

The longer-form companions — what specifically to ship inside each format — are: the pillar quiz playbook, the polls playbook, the photo battle playbook, and personality tests as audience engines. The engineering case for the embed architecture is in Why Native Web Components Beat Iframes.

This report will be updated when the numbers move enough to make it worth reading again.

