In an increasingly competitive B2B marketplace, every department must prove its contribution. Manufacturing output is relatively easy to measure, as is the contribution of the sales department, but what about the marketing department? Is it possible to measure marketing ROI?

There have never been so many tools available to measure the results of marketing activity. But do those tools deliver reliable data on which to base decisions? Is spending time and resources on collecting a mass of data the best way forward or is there a better way?

The basic ROI calculation is (Return – Investment) / Investment over a specific period of time. The return could be sales, gross profit or some other variable.

Marketing ROI could, therefore, be measured using the formula (Sales – Marketing investment) / Marketing investment over a period. However, some of the sales growth may be organic; that is, it would have come in regardless of any marketing effort (simple repeat business, for example).

A more accurate calculation may be (Sales – Organic sales – Marketing investment) / Marketing investment over a period. At a top level, this will deliver a number that should show, in broad terms, if marketing is delivering (or not).
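
The two formulas above can be sketched in a few lines of Python. All figures used here are hypothetical, purely for illustration.

```python
def basic_roi(sales: float, investment: float) -> float:
    """Basic ROI: (Return - Investment) / Investment, over a given period."""
    return (sales - investment) / investment


def marketing_roi(sales: float, organic_sales: float,
                  marketing_investment: float) -> float:
    """(Sales - Organic sales - Marketing investment) / Marketing investment.

    Organic sales (e.g. simple repeat business) are stripped out so that
    revenue which would have arrived anyway is not credited to marketing.
    """
    return (sales - organic_sales - marketing_investment) / marketing_investment


# Hypothetical period: 500k in sales, of which 350k would have arrived
# anyway as repeat business, against 50k of marketing spend.
print(f"Basic ROI:     {basic_roi(500_000, 50_000):.0%}")
print(f"Marketing ROI: {marketing_roi(500_000, 350_000, 50_000):.0%}")
```

Note how stripping out organic sales changes the picture: the same spend looks far less effective once repeat business is removed from the return.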

The aim of the marketing department is simple: maximise sales while minimising marketing expenditure. The department therefore needs to know which of its many activities have the most impact and which are not worth the investment. Investing in the wrong areas, or cutting the wrong costs, will reduce marketing's contribution.

It may be that activity X appears to deliver excellent results, but how do you know those results are not influenced by activity Y? What if some external factor (Z), which may have nothing whatsoever to do with marketing, is having an impact? Can you isolate Y and Z and thus measure X more accurately? In essence, can you really measure X, Y or Z accurately at all?

The Marketing ROI Measurement Issue – An Example

Every year for as long as I can remember I have had at least three severe colds during the winter months. That changed a couple of years ago when I changed my diet after a health scare. Since then no more winter colds – not one.

So it would seem diet and the lack of infection are related, but something else could have changed over the same period. I have worked from home more, so I am around people less and (to be blunt) less exposed to sources of infection. Could that be the reason?

Let’s assume a change of diet was the reason. I then have two options. I could give thanks and not worry about which specific change (or changes) to my diet made the difference. Alternatively, I could spend time and effort trying to establish which items had an impact, so I can eat more of them.

If I don’t worry about what caused the change, I run the risk of inadvertently dropping a critical item from my diet, at which point the colds may return. If I do try to identify the specific items that caused the change, isolating them is far from a simple task.

To reduce the complexity involved in identifying specific causes I could try to make some assumptions. However, that may introduce estimates and errors that make any conclusions I reach useless.

I could, for example, assume the change was down to the increase in the fruit (and therefore vitamin C) I eat. I could stop eating fruit and see what happens, but how long do I wait before measuring the results? What may happen to my health in the meantime? What about the vitamin C I could pick up from a host of other foods? How do I build a valid test with suitable controls?

A vast amount of time and effort may not deliver a definitive result. Vitamin C may indeed have an impact, but what if that impact only appears when factors X and Y, which I am not measuring (or may not even have thought of), are also present?

The Problems With Data

There are many third-party tools available for measuring the impact of various elements of the marketing mix. But how can you be sure that a positive change in whatever is measured is not the result of some transient event you are unaware of (or have chosen to ignore)? And what about long-term, fuzzy objectives like brand awareness: how do you measure those?

Most measurement tools make various assumptions in their calculations. Do you know what those assumptions are? Without accounting for them, the data could be way off. There may also be a widely held belief within a business that ‘X’ is an undisputed fact. The problem is that assumptions can become accepted as facts over time (almost as part of the folklore) when they have no scientific basis.

There is no such thing as perfect information, so where do you stop? Data can be used as a crutch or, worse still, an excuse: “That shouldn’t have happened! The data said otherwise.” With too much data and too much analysis, a business can be slow to react to what is happening in the real world.

What’s The Alternative?

At the other extreme, a ‘who cares about the specific cause if it is working’ approach may be employed. The business may run on something that is difficult to quantify – feel. There may be some very broad measurements of performance in place but that is all.

The risks of this approach are obvious. There is a lack of control, and costs could spiral with no checks and balances. If results hit a downward trend, is it possible to establish what needs to be fixed?

A lack of data can also lead to an increase in company politics. Did sales contribute the most to bringing in the order, or was it operations? Perhaps the opportunity would not have existed in the first place were it not for marketing. Or did the design/engineering department come up with such an elegant, cost-effective solution that it sold itself?

On the plus side, a business run on feel tends to be more able to react to the market. It tends to be far less bogged down in data and closer to what is happening in the real world. However, businesses that succeed with this approach tend to have a specific organisational structure and culture. It usually only works in smaller organisations, or in larger organisations that can isolate specific groups or teams.

Both the ‘data’ and the ‘feel’ routes have their pluses and minuses. A mix of the two, measuring a trend (and accepting its imperfections) rather than chasing an absolute marketing ROI figure, may be a more appropriate goal.
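
One way to picture the trend-based approach is to compute the ROI figure per period and watch its direction rather than its absolute value. A minimal sketch, with entirely invented quarterly figures and organic-sales estimates:

```python
# Each tuple: (sales, estimated organic sales, marketing spend) for a quarter.
# All numbers are hypothetical, for illustration only.
quarterly = [
    (400_000, 300_000, 40_000),
    (450_000, 310_000, 45_000),
    (500_000, 320_000, 45_000),
    (480_000, 330_000, 50_000),
]

# Marketing ROI per quarter: (sales - organic sales - spend) / spend.
roi_series = [(s - o - m) / m for s, o, m in quarterly]

trend = "improving" if roi_series[-1] > roi_series[0] else "flat or declining"
print([round(r, 2) for r in roi_series], "->", trend)
```

Each individual number inherits every estimation error discussed above, but the direction of the series is more robust than any single figure treated as an absolute truth.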

The rough-measures (or no-measures) approach tends to produce a business that is faster on its feet and more tuned in to its market, but it carries obvious risks. The business with perfect data (if such a thing exists), and the courage to act on that data despite the external forces and internal politics that may swirl around it, should make excellent decisions. The trouble is, those decisions may come too late.
