
How to measure incrementality for ASO & paid UA

Run experiments to measure the true performance of your ASO efforts or marketing events

Written by Product Team
Updated over 10 months ago

What is incrementality?

The term incrementality can be used in various ways, so let’s clarify what it means at AppTweak.

👉 Incrementality helps you measure the true impact of your marketing efforts by comparing what actually happened to what would have happened without the event.

In AppTweak, we use predictive modeling with NeuralProphet to help you isolate, quantify, and understand the impact of your work. We do this by:

  1. Establishing a baseline scenario—a statistical estimate of what your performance would have been without an event.

  2. Comparing this baseline to actual results, so you can accurately measure incremental impact.

Want to see it in action? Explore practical examples to run your first analysis, interpret the results, and gain clarity on your ASO impact.

Measuring incrementality helps you answer strategic questions with more confidence, like:

  • Did updating my app metadata drive significant Search Impressions or Store Listing Visitors?

  • Which in-app events/promotional content had a true impact on revenue, beyond expected trends?

  • What was the incremental impact of my app's rebrand on performance?

And much more. By connecting your Apple Search Ads or MMP (Adjust, AppsFlyer) accounts, you can measure the incremental impact of organic, paid, and external campaigns on your most important KPIs.


How does AppTweak compute incrementality?

Take a look at the example below:

  • The yellow line shows actual downloads over time. Notice the spike caused by an event.

  • The green line shows predicted downloads if the event had not occurred.

  • The red area between the yellow and green lines represents incremental downloads gained from the event.

  • The blue area indicates the total predicted downloads without the event.

  • The incremental impact is calculated as the incremental downloads (red area) divided by the predicted downloads without the event (blue area), expressed as a percentage: ((Actual – Predicted) / Predicted) × 100.
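
As a quick illustration with made-up numbers (a sketch only, not figures from the product):

```python
# Hypothetical example: the model predicts 10,000 downloads for the event
# window (blue area) and 12,500 downloads are actually observed, so the
# red area corresponds to 2,500 incremental downloads.
predicted = 10_000          # baseline forecast without the event
actual = 12_500             # observed downloads during the event

incremental = actual - predicted                        # 2,500 downloads
incremental_impact = incremental / predicted * 100      # +25.0%

print(f"Incremental impact: {incremental_impact:+.1f}%")
```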

How does our incrementality model work?

Our predictive models learn from historical data to calculate what would have happened if no marketing action had been taken. This is achieved through two different approaches:

  • 1️⃣ Extrapolation Model (default): Uses only pre-event data to predict what would have happened. Best for events whose impact persists over the long term with no fixed end date (like metadata updates or branding changes), making it the standard model for most use cases.

  • 2️⃣ Interpolation Model: Uses both pre-event and post-event data to refine predictions. Best for events whose impact is short-lived and limited to a fixed duration (like on/off paid UA campaigns), where performance returns to baseline after the event ends.

💪 Note that the model trains on three years of historical data preceding your event start date (or the maximum available period if less than three years) to improve prediction accuracy.

The model accounts for:

  • Trends – The overall direction of data, using changepoints for flexible trend modeling.

  • Seasonality – Recurring patterns, focusing on weekly or yearly cycles.

  • Holidays & Events – Spikes linked to date-specific events (e.g., Christmas Day, New Year's Eve) that may affect app performance.

These factors help remove statistical noise and isolate true impact.
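
For readers who want a concrete picture, below is a minimal sketch of this kind of baseline forecast using the open-source NeuralProphet library. The synthetic data, event date, country, and settings are illustrative assumptions, not AppTweak's actual pipeline.

```python
# Minimal sketch (not AppTweak's production code): fit NeuralProphet on
# pre-event history only (the extrapolation approach) and forecast the
# event window to obtain the "no event" baseline.
import numpy as np
import pandas as pd
from neuralprophet import NeuralProphet

# Hypothetical daily downloads series in NeuralProphet's expected format:
# a "ds" date column and a "y" value column.
dates = pd.date_range("2022-01-01", "2024-11-30", freq="D")
rng = np.random.default_rng(42)
downloads = (
    1_000
    + 50 * np.sin(2 * np.pi * dates.dayofyear / 365)   # yearly pattern
    + rng.normal(0, 20, len(dates))                     # noise
)
history = pd.DataFrame({"ds": dates, "y": downloads})

event_start = pd.Timestamp("2024-11-01")             # hypothetical event date
train_df = history[history["ds"] < event_start]      # pre-event data only

model = NeuralProphet(
    n_changepoints=10,        # flexible trend modeling via changepoints
    yearly_seasonality=True,  # recurring yearly patterns
    weekly_seasonality=True,  # recurring weekly patterns
    daily_seasonality=False,
)
model.add_country_holidays("US")  # date-specific spikes, e.g. Christmas Day

model.fit(train_df, freq="D")

# Predict the 31-day event window: this is the baseline that actuals are
# compared against. (The interpolation variant would also train on
# post-event data to refine this prediction.)
future = model.make_future_dataframe(train_df, periods=31)
baseline = model.predict(future)[["ds", "yhat1"]]     # yhat1 = predicted metric
print(baseline.tail())
```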


How to set up an experiment

Watch this short demo for a quick look at launching your first incrementality experiment, or watch this video series for a deep dive into examples.

1. Select what to measure

First, choose the metric you want to measure. You can select any quantifiable metric like Downloads, Clicks, Impressions, or Revenues from all data sources available in Reporting Studio:

  • AppTweak metrics

  • App Store Connect

  • Google Play Console

  • Apple Search Ads

  • AppsFlyer MMP

  • Adjust MMP

Next, select the country (or multiple countries—values will be aggregated) and the app for the experiment.

Once you've made your selection, a graph of your chosen metric appears so you can verify that the correct data is selected. The Next button activates when the graph is displayed.

2. Enter event details

Then, define the marketing action you want to measure:

  • Event name (e.g., "Metadata update - January").

  • Event start date, which can be up to 7 days before today.

  • Event end date, which can be up to 31 days after the start date.

  • Post-event period (optional): Number of days after the event for which you want to analyze lingering effects (up to 31 days post-event).

Use the date range picker to visualize the selected period. By default, we display one year of data.
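
If it helps to see the limits above in one place, here is a small hypothetical helper that checks an event window against them (the limits come from this article; the code itself is not part of AppTweak):

```python
from datetime import date

# Limits described above (illustrative helper, not product code).
MAX_EVENT_DAYS = 31        # event end date: up to 31 days after the start date
MAX_POST_EVENT_DAYS = 31   # optional post-event period: up to 31 days

def check_event_window(start: date, end: date, post_event_days: int = 0) -> int:
    """Return the total analysis window in days, raising if a limit is exceeded."""
    event_days = (end - start).days
    if not 0 <= event_days <= MAX_EVENT_DAYS:
        raise ValueError("Event end date must be within 31 days of the start date.")
    if not 0 <= post_event_days <= MAX_POST_EVENT_DAYS:
        raise ValueError("Post-event period is capped at 31 days.")
    return event_days + post_event_days   # never more than 62 days in total

# Example: a two-week event followed by a one-week post-event period.
print(check_event_window(date(2024, 11, 1), date(2024, 11, 15), post_event_days=7))  # 21
```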

When to use the post-event period?

Setting a post-event period allows you to analyze whether the impact of an event extends beyond its end date.

Use this setting to detect prolonged effects on your selected metric—for example, if Black Friday promotions continued to impact organic downloads after your campaign ended.

Advanced settings

Training start date

By default, the model uses three years of historical data prior to the event. In some cases, you may want to shorten this period to avoid training the model on outdated or irrelevant data. You can move the training start date closer to the event, but no closer than six months before the event start date: six months is the minimum amount of history the model needs to make accurate predictions.
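
As a quick illustration of those bounds (hypothetical dates, not product code):

```python
import pandas as pd

event_start = pd.Timestamp("2024-11-01")                       # hypothetical event date

default_training_start = event_start - pd.DateOffset(years=3)  # default: 3 years of history
latest_training_start = event_start - pd.DateOffset(months=6)  # minimum: 6 months of history

print(default_training_start.date(), latest_training_start.date())
# 2021-11-01 2024-05-01
```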

Not sure when to change the training start date? Watch the video below to see common use cases.

3. Review & run experiment

The final step is to review your experiment setup. The analysis typically takes 1 to 5 minutes. You can safely navigate to other sections of the tool while the model runs.

Results page

As mentioned above, AppTweak uses two prediction models to estimate the metric without the event: the extrapolation model (default) and the interpolation model (see "How does our incrementality model work?" above).


How to interpret your results

🎥 Watch this video series to learn how to interpret your incrementality results with real examples.

The incrementality results page includes key indicators to quantify the impact of your marketing event:

  • Incremental effect: The percentage change in the metric due to the event, calculated as ((Actual metric – Predicted metric) / Predicted metric) × 100 (see the worked example after this list).

  • P-value: Measures statistical significance. Separate p-values are reported for the event period and the post-event period.

    • < 0.05: Significant evidence of impact

    • ≥ 0.05: No significant impact

  • MedAPE (Median Absolute Percentage Error): Measures the model’s prediction accuracy. Lower values indicate more reliable predictions.

  • Event significance:

    • Significant: The event impacted the metric, based on the p-value.

    • Not significant: No measurable effect.
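
Putting these indicators together with made-up numbers (a sketch only; here MedAPE is computed on a hold-out period before the event, which is one common way to assess forecast accuracy):

```python
import numpy as np

# --- Made-up numbers for illustration only ---

# Prediction accuracy (MedAPE) is judged where the truth is known, e.g. on a
# hold-out period before the event: lower values mean more reliable predictions.
holdout_actual = np.array([1000, 980, 1020, 990, 1010, 1005, 995])
holdout_predicted = np.array([990, 1000, 1010, 1000, 1000, 1000, 1000])
medape = np.median(np.abs((holdout_actual - holdout_predicted) / holdout_actual)) * 100

# Incremental effect over the event window:
# ((Actual metric - Predicted metric) / Predicted metric) * 100
event_actual = 18_000      # observed metric during the event
event_predicted = 15_000   # predicted baseline without the event
incremental_effect = (event_actual - event_predicted) / event_predicted * 100

p_value = 0.01             # made-up output of the significance test
verdict = "Significant" if p_value < 0.05 else "Not significant"

print(f"MedAPE: {medape:.1f}%")                           # ~1.0%
print(f"Incremental effect: {incremental_effect:+.1f}%")  # +20.0%
print(verdict)                                            # Significant
```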

Types of results

  • Incremental lift – Significant positive impact of the event on the metric.

  • Inconclusive – No significant change detected.

  • Incremental drop – Significant negative impact of the event on the metric.

Graph interpretation

The graph allows you to analyze incremental impact over time on your selected metric:

  • Actual metric (solid red line) – Observed data.

  • Predicted baseline (green dashed line) – What the model forecasts would have happened without the event.

  • 95% confidence interval (green shaded area) – The model’s certainty range. A narrower band indicates higher model confidence.
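
If you export your results and want to reproduce this kind of chart yourself, here is a rough matplotlib sketch; the column names and values are assumptions, not AppTweak's export format:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical daily results: observed metric, predicted baseline, and the
# bounds of the 95% confidence interval around the prediction.
dates = pd.date_range("2024-11-01", periods=14, freq="D")
results = pd.DataFrame({
    "ds": dates,
    "actual": 1200 + 10 * np.arange(14),
    "predicted": np.full(14, 1000.0),
    "lower_95": np.full(14, 950.0),
    "upper_95": np.full(14, 1050.0),
})

fig, ax = plt.subplots()
ax.plot(results["ds"], results["actual"], color="red", label="Actual metric")
ax.plot(results["ds"], results["predicted"], color="green", linestyle="--",
        label="Predicted baseline (no event)")
ax.fill_between(results["ds"], results["lower_95"], results["upper_95"],
                color="green", alpha=0.2, label="95% confidence interval")
ax.set_title("Actual vs. predicted baseline during the event")
ax.legend()
plt.show()
```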

🚀 Key takeaway: If the actual metric significantly exceeds the prediction, the event drove incremental growth.


FAQs

Will the prediction change over time?

No. Once an experiment runs, its results remain the same unless you modify the input parameters or date range.

Is an "Inconclusive" experiment a bad result?

Not necessarily. It may indicate that your marketing efforts didn’t create a measurable change—which can also be valuable to consider when reallocating resources.

Is the model deterministic?

Yes, both models are deterministic, meaning they will always produce the same results when given the same input data. The predictions won’t change unless you modify the training date range or input parameters.

Is there a time limit for experiments?

To ensure accuracy, event durations are capped at 31 days, with an optional post-event period of up to 31 days. This means the total analysis window cannot exceed 62 days.

Can I analyze competitors' incremental impact?

Yes! You can run an incrementality experiment on your competitors’ ASO data to learn from their app store updates and marketing activities.


With incrementality analysis, go beyond basic performance tracking to quantify the true impact of your ASO and paid efforts. Exclusive to AppTweak, this tool helps you cut through the noise, isolate real growth, and make data-backed decisions with confidence.
