As a fellow marketer, I’m sure you’ll agree that advances in measurement and reporting technologies have revolutionised the field. However, with these advances has come increased pressure to justify where budget is spent.
This series addresses the performance metrics that matter when reporting different types of experiential campaigns. Today, the focus is product sampling!
The Aim Of Product Sampling
Sampling is about improving awareness, perceptions and sales of a product by encouraging trial through free samples.
Clearing Up The Confusion
One of the most prevalent issues in experiential reporting is not a lack of measurement, but rather a lack of depth in the measurements. Performance metrics should generate deep insights into the effects of campaigns.
Thus, whilst figures like ‘samples distributed’ are undeniably useful raw statistics, on their own they tell us little that is meaningful about the impact of the campaign. The onus is on marketers to determine how stats like this can be used to develop metrics which provide actionable insights – the metrics that matter! Fortunately for marketers reading this, we’ve already done that, so all you need to do is keep reading.
Metrics That Matter
NPS (Net Promoter Score)
According to Medallia, “The Net Promoter Score is an index ranging from -100 to 100 that measures the willingness of customers to recommend a company’s products or services to others.”
NPS is perfect for sampling campaigns, as it shows how likely consumers are to actively recommend the product. Not only does this give insight into brand perceptions, it also helps marketers predict the true reach of the campaign and the influence it had on consumer behaviour.
For more detail on how to calculate Net Promoter Score, visit the Net Promoter Network.
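In practice, the calculation is straightforward: promoters (those rating 9–10) minus detractors (those rating 0–6), expressed as percentages of all respondents. A minimal sketch in Python, using illustrative ratings rather than figures from a real campaign:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters rate 9-10, detractors rate 0-6; NPS is the percentage
    of promoters minus the percentage of detractors (-100 to 100).
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten sampled consumers rate their likelihood to recommend:
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(net_promoter_score(ratings))  # 5 promoters, 2 detractors -> 30.0
```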
Long-term sales uplift & customer churn rate
Comparing initial sales uplift with long-term figures demonstrates a campaign’s efficacy in creating valuable, repeat-purchase customers. Metrics like these also help marketers identify areas for improvement within campaigns.
Similarly, customer churn rate (consumers lost in a given period ÷ consumers at the start of the period) indicates the percentage of consumers who do not go on to repeat purchase. Identifying who these customers are is the first step to understanding why they chose not to.
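The churn calculation described above can be sketched in a couple of lines; the numbers below are purely illustrative:

```python
def churn_rate(customers_at_start, customers_lost):
    """Percentage of customers lost over the period:
    consumers lost / consumers at the start of the period."""
    return 100 * customers_lost / customers_at_start

# E.g. 500 customers at the start of the quarter, 40 did not return:
print(churn_rate(customers_at_start=500, customers_lost=40))  # -> 8.0
```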
Sales acquisition cost
Sales acquisition cost is calculated by dividing total campaign costs by the sales increase. This metric is incredibly useful when justifying spend to directors: by ascribing an exact monetary cost to each sale acquired, it is plain to see whether or not repeating the campaign would be beneficial. It is also a useful metric to include when reporting a sales-driven campaign.
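As a minimal sketch of that division (the campaign cost and sales figures here are invented for illustration):

```python
def sales_acquisition_cost(total_campaign_cost, sales_increase):
    """Cost attributable to each additional sale:
    total campaign cost / sales increase."""
    return total_campaign_cost / sales_increase

# E.g. a £12,000 campaign that generated 3,000 additional sales:
print(sales_acquisition_cost(12_000, 3_000))  # -> 4.0 (i.e. £4 per sale)
```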
Social media activity
Increasing levels of social media activity demonstrate both increased awareness and positive perceptions of a brand. New followers illustrate increasingly positive perceptions, as they represent consumers’ desire to be associated with the brand. Similarly, mentions, co-created content and shares increase brand awareness through earned promotion.
N.B. Whilst shares are pertinent in this instance, stats like impressions and likes are often seen as throwaway “vanity” metrics, as they are not necessarily demonstrative of meaningful engagement.
Conversion rate
Typically associated with digital, conversion rate represents the percentage of samples distributed which resulted in sales uplift. It can be calculated by dividing the total sales uplift by the number of samples distributed. Whilst extraneous variables can impact this figure, conversion rate is a key metric when measuring sales impact.
In addition to this, when running large sampling campaigns across multiple stores, conversion rate helps identify which locations were most successful, which, in turn, cues further investigation.
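The per-location comparison described above can be sketched as follows; the store names and figures are hypothetical:

```python
def conversion_rate(sales_uplift, samples_distributed):
    """Percentage of distributed samples that converted into extra sales:
    total sales uplift / samples distributed."""
    return 100 * sales_uplift / samples_distributed

# Comparing conversion across stores flags which locations warrant
# further investigation (figures are illustrative):
stores = {"Bristol": (450, 5_000), "Cardiff": (210, 5_000)}
for name, (uplift, samples) in stores.items():
    print(name, conversion_rate(uplift, samples))
# Bristol converts at 9.0%, Cardiff at 4.2% -> investigate Cardiff.
```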
Because we’re good people here at eventeem, I’ve included two ‘honourable mention’ metrics that matter below. These measurements could easily have found themselves in the top 5, but they’re not always applicable due to time or campaign restrictions.
Promotions redeemed
Applicable when: Running a promotion alongside sampling.
Keeping track of how many promotions are redeemed and in what areas helps marketers develop customer and location profiles which can be used to identify missteps and improve future campaigns, similar to conversion rate.
In addition, for a promotion-centric campaign, promotions redeemed can be interchanged with sales for metrics like conversion rate and sales acquisition cost to gain deeper insight into the value of the promotion.
Consumer feedback
Applicable when: Interaction time and staff numbers permit.
Consumer feedback is always valuable and can give immediately actionable insights into a campaign’s performance. The primary problem with collecting this data is that it requires a greater time commitment from both staff and consumers, which may not be feasible for everyone.
The primary difference between this and the other metrics mentioned is that consumer feedback can be qualitative or quantitative. Whilst both have their merits, I recommend a balanced mix: quantitative questions give objective, structured responses which are useful when reporting, whilst qualitative questions give consumers the chance to answer freely, generating insights and suggestions which would never surface from a structured question.
Test, measure, learn
It is vital to remember that, whilst metrics are interchangeable depending on the campaign, the process remains the same, and learning is by far the most important link in the chain.
The idea of a campaign as a single, standalone entity is finished. With a wealth of insights and tools available to the modern marketer, you must do everything in your power to stay ahead of the curve, and to say that learning from your previous experiences plays a part in this is a colossal understatement.
In what has become a game of inches, data generated through your own campaigns is very much the low-hanging fruit, being both free and exclusive.
By now you should have a solid understanding of which metrics best represent the results of a sampling campaign, as well as a decent grasp of how stats can be chopped and changed within metrics to report results in the same format.
At the very least, I hope to have given you some ammunition for the next time you’re staring down the barrel in the boardroom. As always, if you have any questions about metrics that matter, or anything else, I’m available here, on Twitter or LinkedIn, or you can contact me directly at email@example.com