SKAdNetwork (SKAN) and MMP (mobile measurement partner) attribution metrics don’t agree with each other – and probably never will. This is well known, but it has been easy to ignore while MMP fingerprinting remains the preferred measurement method for many app marketing teams.

With SKAN set to become the default iOS measurement tool in 2024, marketers increasingly need to understand how to make sense of SKAN metrics – which can sometimes tell quite a different story from MMP numbers.

For some campaigns, advertisers may in fact see significantly fewer paid attributions under SKAN – an indication that partners were being falsely credited with organic traffic. In others, SKAN catches paid attributions that MMPs missed.

Below, we highlight the main reasons that MMP and SKAN data can disagree and explain how teams should approach ad measurement in the new world.

Install tracking: why MMPs can report more paid installs than SKAN

If SKAN reports fewer installs than the MMP, this may be due to false positives (aka over-attribution) resulting from probabilistic fingerprinting.

It’s no secret that the accuracy of probabilistic “fingerprinting” has always been below 100%. Now, following the collapse of deterministic tracking, it has become easy to forget that some share of organic conversions are falsely matched to paid campaigns.

A common scenario for such over-attribution is when many similar devices sit behind the same public IP address. Examples of why this happens are countless: apartment buildings and hotels, shopping malls, restaurants, and sports venues.

This is in contrast to SKAN, which only awards an attribution if the device in question actually saw or tapped a relevant ad.
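To make the collision problem concrete, here is a minimal Python sketch of a toy fingerprint matcher. The fingerprint fields, device counts, and matching rule are all invented for illustration – real fingerprinting uses more signals, but a shared IP degrades it in exactly this way:

```python
import random

# Toy fingerprint: (public IP, device model, OS version). Real fingerprinting
# uses more signals, but shared IPs cause the same kind of collisions.
SHARED_IP = "203.0.113.7"                 # one hotel / apartment-block IP
MODELS = ["iPhone14,2", "iPhone15,3"]
OS_VERSIONS = ["17.1", "17.2"]

def fingerprint(device):
    return (device["ip"], device["model"], device["os"])

# 500 devices behind the same IP; only the first 20 actually clicked the ad.
devices = [
    {"id": i,
     "ip": SHARED_IP,
     "model": random.choice(MODELS),
     "os": random.choice(OS_VERSIONS),
     "clicked_ad": i < 20}
    for i in range(500)
]

# The matcher credits ANY install whose fingerprint matches a logged click.
click_prints = {fingerprint(d) for d in devices if d["clicked_ad"]}
installs = random.sample(devices, 50)     # 50 devices later install the app

attributed = [d for d in installs if fingerprint(d) in click_prints]
false_positives = [d for d in attributed if not d["clicked_ad"]]
print(f"attributed: {len(attributed)}, false positives: {len(false_positives)}")
```

With only four distinct fingerprints behind the IP, nearly every organic install “matches” a logged click and is credited to the campaign.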

Redownloads: SKAN reports more than MMPs

Another key difference is that SKAN often labels a larger proportion of installs as redownloads when compared with MMPs.

MMPs will report “fresh” installs in many cases where a user had previously installed the app – provided the earlier install happened outside a defined window, which the advertiser can configure.

Conversely, SKAN seems to remember all prior downloads for each Apple ID, labeling all future installs as redownloads where relevant.

According to an App Store representative, an Apple ID’s “purchase history” is used to label “any subsequent app installs” as redownloads.

For advertisers keen on re-engaging past users, it’s important not to ignore redownloads when considering SKAN campaign performance. Remember that many of the installs labeled as “redownloads” by Apple would in fact show as fresh installs in MMP reports.
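A hypothetical sketch of the two labeling rules side by side – the 90-day re-attribution window and the helper names are assumptions for illustration, not any vendor’s actual API:

```python
from datetime import datetime, timedelta
from typing import Optional

def mmp_label(prev_install: Optional[datetime], now: datetime,
              reattribution_window: timedelta = timedelta(days=90)) -> str:
    # MMP rule: a prior install outside the (advertiser-configurable)
    # window does not block a new "fresh install" attribution.
    if prev_install is None or now - prev_install > reattribution_window:
        return "fresh install"
    return "re-attribution"

def skan_label(ever_installed_before: bool) -> str:
    # SKAN rule: any prior download in the Apple ID's purchase history
    # marks the install as a redownload, with no expiry.
    return "redownload" if ever_installed_before else "install"

now = datetime(2024, 3, 1)
prev = datetime(2023, 6, 1)                    # installed ~9 months earlier
print(mmp_label(prev, now))                    # -> fresh install
print(skan_label(ever_installed_before=True))  # -> redownload
```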

Multi-touch attribution (assisted installs)

Like MMPs, SKAN supports a form of multi-touch attribution. Assisted-install reporting arrived with the “did-win” flag introduced in SKAN 3.0.

SKAN supports up to five non-winning ad networks that contributed to the install, going slightly beyond most MMPs here (AppsFlyer, for example, attributes up to three).

Because SKAN reporting is always deterministic, assist reporting should generally be more accurate. This is in contrast to MMP solutions, which prioritize different attribution methods according to their accuracy.

For example, in an MMP waterfall, an older click matched deterministically will beat a more recent engagement that relies on a probabilistic match. In SKAN, both engagements would have been tracked deterministically, so the most recent one wins.

SKAN assists also help elucidate what’s really happening with self-reporting networks (SRNs), showing whether the network(s) in question really did win the attribution – without having to trust each network’s own matching algorithms.
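As an illustration, here is a sketch of separating winning from assisting SKAN 3.0 postbacks. The field names follow Apple’s postback format, but the network IDs, values, and transaction IDs are fabricated for the example:

```python
# Two fabricated SKAN 3.0 postbacks for the same install.
postbacks = [
    {"version": "3.0", "ad-network-id": "example1.skadnetwork",
     "campaign-id": 42, "app-id": 525463029, "did-win": True,
     "redownload": False, "conversion-value": 37,
     "transaction-id": "6aafb7a5-0170-41b5-bbe4-fe71dedf1e28"},
    {"version": "3.0", "ad-network-id": "example2.skadnetwork",
     "campaign-id": 7, "app-id": 525463029, "did-win": False,
     "redownload": False,
     "transaction-id": "f9ac267a-a889-44ce-b5f7-0166d11461f0"},
]

winner = [p["ad-network-id"] for p in postbacks if p["did-win"]]
assists = [p["ad-network-id"] for p in postbacks if not p["did-win"]]
print("won:", winner)        # the network credited with the install
print("assisted:", assists)  # up to five non-winning networks per install
```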

View-through: SKAN is more restrictive, but more reliable

Advertisers often perceive view-through attribution as problematic due to the impact of false attributions, especially for banner campaigns: the format’s low cost makes it cheap to expose a large number of users to ads.

This presents a major challenge for fingerprinting, which most MMPs address by significantly restricting their view-through attribution window to around 1-8 hours post-view.

Since SKAN matching is always deterministic, the view-through attribution window can be much longer – up to a full 24 hours. This eliminates the risk of over-attribution which has historically made advertisers skeptical of view-through metrics.
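In sketch form, with the window lengths taken from the discussion above (real MMP defaults vary by vendor and configuration):

```python
from datetime import timedelta

MMP_VT_WINDOW = timedelta(hours=8)    # typical upper bound for fingerprint VT
SKAN_VT_WINDOW = timedelta(hours=24)  # SKAN's view-through window

def view_through_eligible(time_since_view: timedelta,
                          window: timedelta) -> bool:
    # An impression can only win view-through credit inside the window.
    return time_since_view <= window

since_view = timedelta(hours=15)
print("MMP eligible: ", view_through_eligible(since_view, MMP_VT_WINDOW))   # False
print("SKAN eligible:", view_through_eligible(since_view, SKAN_VT_WINDOW))  # True
```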

Although the advertiser may not know for sure how much influence an individual ad had on the conversion outcome, they can at least be certain that the user was exposed to the ad in question.

This makes SKAN an excellent input to downstream methodologies like media mix modeling (MMM), which can then be used to evaluate the true impact of marketing on the measured outcomes.

Post-install conversion differences

One of the key differences – and limitations – of SKAN-only measurement is that advertisers can no longer expect a granular breakdown of revenue per paid media channel.

This is because SKAN only permits a handful of post-install signals: at most three postbacks, each carrying a limited conversion value, reported within predefined time windows. Advertisers should therefore expect SKAN to report only a subset of the true number of conversion events.
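A minimal sketch of why this under-reports revenue: the fine conversion value is a 6-bit integer (0–63), so revenue can only be encoded as coarse buckets, and only for activity inside the measurement window. The bucket edges below are hypothetical – real schemes are chosen per app:

```python
# Hypothetical revenue buckets (USD) mapped onto the 6-bit conversion value.
REVENUE_BUCKETS = [0.00, 0.99, 1.99, 4.99, 9.99, 19.99, 49.99]

def encode_conversion_value(revenue_in_window: float) -> int:
    """Map in-window revenue to the highest bucket index reached (0-63)."""
    value = 0
    for i, edge in enumerate(REVENUE_BUCKETS):
        if revenue_in_window >= edge:
            value = i
    return min(value, 63)

def decode_revenue_floor(conversion_value: int) -> float:
    """The advertiser can only recover the bucket floor, not true revenue."""
    return REVENUE_BUCKETS[min(conversion_value, len(REVENUE_BUCKETS) - 1)]

cv = encode_conversion_value(12.49)
print(cv, decode_revenue_floor(cv))   # -> 4 9.99: $12.49 reported as $9.99
```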

Accordingly, assessing absolute return on ad spend (ROAS) is no longer realistic, since total revenue reported using SKAN will always look lower than the truth. SKAN revenue metrics should therefore be viewed in terms of trends, rather than absolute comparisons – is the SKAN revenue increasing or decreasing for this channel? And how does this compare to total user revenue (paid and organic) reported outside of SKAN?
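For instance, one simple way to read the trend is to index SKAN revenue against total revenue week by week (all figures below are invented):

```python
# Invented weekly figures for one channel.
skan_revenue  = {"W1": 4200, "W2": 4600, "W3": 5100}     # reported via SKAN
total_revenue = {"W1": 21000, "W2": 22000, "W3": 23500}  # paid + organic, non-SKAN

for week in skan_revenue:
    share = skan_revenue[week] / total_revenue[week]
    print(f"{week}: SKAN revenue {skan_revenue[week]:>5}  share {share:.1%}")
# A rising share suggests the channel is genuinely growing, even though the
# absolute SKAN figure understates true revenue.
```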

This is the area in which advertisers will be most tempted to keep relying on MMP fingerprinting – but it’s worth remembering that the absolute revenue figures given by the MMP are also an estimate, given the limitations of fingerprinting.

Conclusion

SKAdNetwork gives advertisers the opportunity to improve campaign measurement accuracy, but it comes at the cost of the highly granular reporting we all became used to before the privacy era in advertising.

The differences explored above highlight the fact that SKAN cannot – and will not – report the same number of conversions as MMPs.

It is essential to understand why an apples-to-apples comparison with past data is impossible, and how to interpret the new data you see in SKAN reports. This knowledge will help advertisers as they jump into the deep end with campaigns that are measured and optimized with SKAN.