Test Campaign
Every engagement with Specify starts with a test campaign. It’s a one-time, fixed-price campaign designed to tell you — and us — whether your product is a fit for conversions-focused performance marketing.
It’s also a lot of work on our side. The test isn’t just “run some ads and see what happens”; it’s where we calibrate Specify to your specific product, verify every piece of the attribution pipeline, and produce a report detailed enough to base real decisions on.
What it is
$1,000 for 15,000 impressions (roughly a $67 CPM). One-time. No commitment to continue.
Every project goes through this step, regardless of size, stage, or existing traction. It’s how we calibrate pricing, confirm targeting, and generate the performance data that results-based pricing depends on.
What’s included
The test campaign uses the full Specify stack — nothing is held back:
- Full targeting — onchain behavioural targeting, project selection, chain narrowing, exclusion targeting
- Full conversion tracking — 14-day attribution window, view-through conversions, wallet grouping, auditable dashboard
- Contract and targeting calibration — see below
- An extensive post-campaign report — see below
- A walkthrough call — we go through the report with you, answer questions, and plan next steps
The work we do before ads go live
A core purpose of the test campaign is fine-tuning Specify to your project. The system works out of the box for most products, but a few pieces need project-specific work before the results are fully accurate.
Before any impressions are served, we:
- Verify your contracts. Specify maintains a database of 100,000+ smart contracts mapped to the projects they belong to. For your project, we manually review every smart contract across every chain you deploy on — and fix any mappings that are incorrect or incomplete. A single miscategorised contract can quietly distort conversion numbers, so we treat this as a non-optional setup step
- Confirm your targeting definitions. For every project you select in your targeting, we manually check that our definition of “a user of Project X” genuinely captures that project’s audience — no false positives from forks, unrelated deployments, or contracts sharing a name
- Set up any needed custom integrations. If your product has offchain activity that matters for conversion attribution, or depends on third-party APIs for volume data, we build and test that pipeline before the campaign starts rather than patching post-hoc
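The contract-verification step above amounts to a consistency check over the mapping database. Here is a minimal sketch of the idea in Python; the record schema (`chain`, `address`, `project` fields) and function name are hypothetical, not Specify's actual data model:

```python
# Illustrative sketch of a contract-mapping audit.
# Flags the two failure modes described above: incomplete entries,
# and the same contract mapped to more than one project.

def find_mapping_issues(mappings):
    """mappings: list of dicts with 'chain', 'address', 'project' keys.
    Returns a list of (issue_type, address) tuples."""
    issues = []
    seen = {}
    for m in mappings:
        if not m.get("chain") or not m.get("project"):
            issues.append(("incomplete", m.get("address", "")))
            continue
        # Normalise the address so checksum-cased duplicates collide.
        key = (m["chain"], m["address"].lower())
        if key in seen and seen[key] != m["project"]:
            issues.append(("conflict", m["address"]))
        seen[key] = m["project"]
    return issues
```

In practice a check like this only surfaces candidates; the actual correction is the manual review described above.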
This pre-launch work is the least glamorous part of the test campaign, and it’s also the reason the data coming out the other side is trustworthy.
What’s in the report
At the end of the campaign we produce a detailed report — typically 10–15 pages — designed to tell you what happened, why it happened, and what to do next. The exact contents vary with your product, but a typical report includes:
Headline numbers — total conversions, attributed volume, and incremental lift at a glance.
Conversion breakdown — conversion rate, new users vs. returning inactive users, unique wallets, total transactions, average dormancy of returning users, and average volume per user.
Dormancy distribution — a breakdown of returning users by how long they’d been inactive before converting (e.g. 45–60 days, 60–90 days, 90–180 days, 180+ days) with the volume each cohort generated. Tells you which segments the campaign was best at reactivating.
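As a rough illustration of how a dormancy breakdown like this can be computed (the bucket edges mirror the example ranges above; the function and input shape are hypothetical):

```python
# Bucket returning converters by days of pre-conversion inactivity,
# reporting user count and total volume per cohort. Half-open buckets.

BUCKETS = [(45, 60), (60, 90), (90, 180), (180, None)]

def dormancy_distribution(users):
    """users: list of (days_inactive, volume) tuples for returning converters.
    Returns {bucket_label: (user_count, total_volume)}."""
    dist = {}
    for lo, hi in BUCKETS:
        label = f"{lo}-{hi}d" if hi else f"{lo}+d"
        cohort = [(d, v) for d, v in users if d >= lo and (hi is None or d < hi)]
        dist[label] = (len(cohort), sum(v for _, v in cohort))
    return dist
```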
Volume by chain — volume, transaction count, and average transaction size per chain. Shows you where your highest-value converters actually operated, and informs chain-specific targeting on follow-on campaigns.
Volume distribution — the top 10%, top 50%, and bottom 50% of converters ranked by volume. Almost always reveals that a small cohort of users drives the majority of volume — which is valuable to know before you plan the next campaign.
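The concentration statistic can be sketched in a few lines. This is a hypothetical helper, assuming one volume figure per converter:

```python
# Share of total volume contributed by the top 10%, top 50%,
# and bottom 50% of converters ranked by volume.

def volume_concentration(volumes):
    ranked = sorted(volumes, reverse=True)
    total = sum(ranked)
    n = len(ranked)
    top10 = sum(ranked[: max(1, n // 10)])
    top50 = sum(ranked[: n // 2])
    return {
        "top_10%": top10 / total,
        "top_50%": top50 / total,
        "bottom_50%": (total - top50) / total,
    }
```

On typical data a call like `volume_concentration([100, 50, 20, 10, 5, 5, 4, 3, 2, 1])` shows the pattern described above: the single top converter accounts for half of all volume.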
Post-conversion behaviour — repeat usage within the attribution window: the share of users who made 2+ or 3+ transactions, and the average transactions per converting user. Tells you whether you acquired one-and-done users or people who stuck around.
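Computed from per-transaction wallet data, the repeat-usage metrics look roughly like this (illustrative function and input shape):

```python
from collections import Counter

def repeat_usage(transactions):
    """transactions: list of wallet ids, one entry per transaction.
    Returns (share of wallets with 2+ txs, share with 3+ txs,
    average txs per converting wallet)."""
    counts = Counter(transactions)
    n = len(counts)
    two_plus = sum(1 for c in counts.values() if c >= 2) / n
    three_plus = sum(1 for c in counts.values() if c >= 3) / n
    return two_plus, three_plus, len(transactions) / n
```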
Conversion persona analysis — what other projects and protocols your highest-value converters actually use, expressed as “users in this cohort are X percentage points more likely to use Protocol Y than a low-value converter.” This is unique to Specify; it’s only possible because the behavioural data is public onchain. It tells you specifically where to find more users like your best ones.
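The percentage-point comparison described above reduces to a difference in usage rates between cohorts. A minimal sketch, with hypothetical names and input shape:

```python
# Percentage-point difference in usage of a given protocol between
# high-value and low-value converter cohorts. Each cohort is a list
# of sets, one set of protocols used per wallet.

def persona_lift(high_value, low_value, protocol):
    rate = lambda cohort: sum(protocol in used for used in cohort) / len(cohort)
    return 100 * (rate(high_value) - rate(low_value))
```

For example, if 2 of 3 high-value wallets use a protocol versus 1 of 4 low-value wallets, the lift is about 41.7 percentage points.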
Incrementality study — the formal lift measurement, including control-group methodology, conversion rates for both groups, lift ratio, and statistical significance. See the Incrementality page for the full explanation of how this works.
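The lift ratio and significance figures described above follow standard treatment-vs-control math. Here is a minimal sketch using a pooled two-proportion z-test; this particular test is an assumption for illustration, not necessarily the exact methodology (see the Incrementality page for that):

```python
from math import sqrt, erf

def lift_and_significance(conv_t, n_t, conv_c, n_c):
    """Lift ratio, z-score, and two-sided p-value for treated vs.
    control conversion counts (pooled two-proportion z-test)."""
    p_t, p_c = conv_t / n_t, conv_c / n_c
    lift = p_t / p_c
    p_pool = (conv_t + conv_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Normal-approximation two-sided p-value via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, z, p_value
```

For example, 120 conversions from 10,000 treated wallets against 80 from 10,000 control wallets gives a 1.5x lift that is significant at the 1% level.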
Calibration notes — any contract mapping corrections or integration issues we identified and fixed during the campaign, and what we put in place to prevent them in future.
Recommendations and open questions — targeting refinements, follow-on campaign angles, and specific questions for your team about priorities (acquisition vs. reactivation, which user segments are most valuable to you, which onchain actions generate revenue, etc.).
We then walk through the report with you on a call — to answer questions, go over findings in context, and plan next steps together.
What happens next
After the report, you have three options:
- Move to results-based pricing — fixed CPA or tiered volume-based, with the rate set from what the test showed
- Run another test — if we want to validate changes to targeting, creative, or conversion definition before committing
- Walk away — no commitment, no follow-on obligation
If the results are strong, we move forward. If they aren’t, you’ve learned something specific about your product, your users, and your web3 performance marketing potential — for $1,000 and a couple of weeks.
Contracting
For the test campaign we put things in writing with a signed IO (Insertion Order) covering scope, price, impression volume, and timeline. Clear paper trail, no ambiguity.
Once we scale beyond the test into ongoing campaigns, we typically move to our standard advertiser terms of service as the governing agreement. That keeps turnaround fast for subsequent campaigns — you shouldn't need to chase a new signed document every time you want to launch something.
Why every project does this
Skipping the test campaign isn't an option, and that's deliberate.
Results-based pricing only works if the pricing genuinely reflects your product’s performance. We can’t set a fair conversion price without real data, and you shouldn’t agree to one without seeing how Specify performs on your specific audience. The test campaign is the shortest path to getting both — and the calibration work that happens during it means every follow-on campaign benefits from a cleaner, more accurate pipeline from day one.