More Mobile Campaign Management Systems

It turns out the world of mobile marketing systems is further developed than I thought. (Either that, or these people are really good at putting up impressive Web sites.) A bit more research turned up a number of vendors who appear to have reasonably sophisticated mobile marketing systems. Those that seem worth noting include: MOVO, Flytxt, Kodime, MessageBuzz, Velti, Wire2Air, and Knotice. These are in addition to the firms I mentioned yesterday: Enpocket, Ad Infuse and Waterfall Mobile.

It’s hard to tell which of these are sold as software and which are platforms used in-house by mobile marketing agencies. I suspect most fall into the latter category. And, of course, without examining them closely you never know what’s real. But at a minimum we can say that several companies who understand what it takes to build a decent marketing system have applied their knowledge to mobile marketing.

So far, it seems most of these companies are mobile marketing specialists. Knotice stands out as an exception, claiming to integrate “email, web, mobile and emerging interactive TV platforms.”

Mobile is outside my current range of activities, so I don’t know how much more time I’ll be spending on the topic. But I’m glad I took a little peek—it’s interesting to see what’s going on out there.

Enpocket Makes Mobile Advertising Look Mature

As you know from Monday's post, I’ve been poking around a bit at mobile marketing software. One company that turned up is Enpocket, a Boston-based firm that has developed what appears—and I’m basing this only on their Web site—to be an impressively complete system for managing mobile advertising campaigns. Its two main components are a marketing engine that sends messages in pretty much any format (text, email, Web page, video, etc.), and a personalization engine that builds self-adjusting predictive models to target those messages. The marketing engine also maintains some form of customer database—again, I haven’t contacted the company for details—that holds customer preferences and permissions, predictive model scores, and external information such as demographics, billing, and phone usage.
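To make that architecture concrete, here is a hedged sketch of how the two engines might fit together. This is my own illustration based on the Web site's description, not Enpocket's actual design; the customer records, model scores, and campaign name are all hypothetical.

```python
# Illustrative sketch (not Enpocket's actual design): a marketing
# engine consults a customer database and personalization-engine
# scores before sending a targeted mobile message.

customers = {
    "555-0101": {
        "permissions": {"sms": True, "video": False},
        "scores": {"data_plan_upsell": 0.82},  # from the personalization engine
        "demographics": {"age_band": "25-34"},
    },
    "555-0102": {
        "permissions": {"sms": False, "video": False},
        "scores": {"data_plan_upsell": 0.91},
        "demographics": {"age_band": "45-54"},
    },
}

def target(campaign: str, threshold: float):
    """Yield (phone, format) for customers who score above the
    threshold and have granted permission for some format."""
    for phone, record in customers.items():
        if record["scores"].get(campaign, 0.0) < threshold:
            continue
        for fmt in ("video", "sms"):  # prefer richer formats first
            if record["permissions"].get(fmt):
                yield phone, fmt
                break  # customers with no permitted format get nothing

for phone, fmt in target("data_plan_upsell", threshold=0.8):
    print(f"send {fmt} message to {phone}")
```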

Enpocket describes this in some detail in a white paper published last year. The paper is aimed primarily at convincing mobile phone operators to use the system themselves to market data services to their customers. This is just one small opportunity within the world of mobile marketing, but perhaps it’s a shrewd place to start since the operators have full access to internal data that might not be available to others. Other materials on the Enpocket site indicate they also work with other types of advertisers, and that they now offer a content and community management module (not sure why those are lumped together) that is not mentioned in the white paper.

I don’t know what, if anything, is truly unique about Enpocket. For example, Ad Infuse also uses “advanced matching algorithms” to target mobile ads. Waterfall Mobile promises interactive features (voting, polling, on-demand content delivery, etc.) that Enpocket doesn’t mention. But what impresses me about Enpocket is the maturity of their vision: rule-based and event-triggered campaigns linked to a customer database and automated targeting engine.

It took conventional database marketers years to reach this stage. Even Web marketers are just starting to get there. Obviously enpocket has the advantage of building on what’s already been done in other media. But there’s still a big difference between knowing what should be done, and actually doing it. While I don’t know what enpocket has actually delivered, at least they’re making the right promises.

Defining Process is Key to Selecting Software

Readership of this blog picks up greatly when I write about specific software products. I’d like to think that’s because I am a famous expert on the topic, but suspect it’s simply that people are always looking for information about which products to buy. Given the volume of information already available on the Internet, this seems a little surprising. But given the quality of that information, perhaps not.

Still, no matter how pleased I am to attract more readers, nothing can replace talking directly to a software vendor. And not just talking, but actually seeing and working with their software. I’ve run across this many times over the years: people buy a product for a specific purpose without really understanding how it will do what they have in mind. Then, sometimes literally within minutes of installing it, they find it isn’t what they need.

This doesn’t mean every software purchase must be preceded by a test installation. But it does mean your pre-purchase research has to be thorough enough that you understand how the software will meet your goals. Sometimes there’s enough information on the vendor’s Web site to know this; sometimes it takes a sales demonstration; sometimes you have to load a trial copy and play with it. Sometimes nothing less than a true proof of concept—complete with live data and key functionality—will do.

So how do you know when you know enough to buy? That’s the tricky part. You must define what you want the system to do—that is, your requirements—and understand what capabilities the system needs to do it. The only way I know to do this is to work through the process flow of the system: a step-by-step description of the inputs, processing and outputs needed to accomplish the desired outcome. You then identify the system capabilities needed at every stage in the process. Of course, this is harder than it sounds when systems are complicated and there are many ways to do things.
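As an illustration, here is a minimal sketch of such a process-flow worksheet, using an entirely hypothetical email campaign. Each step records its inputs, processing, and outputs, plus the system capabilities it implies; none of this comes from any particular product.

```python
# A minimal, hypothetical process-flow worksheet for evaluating
# campaign software: each step maps inputs/processing/outputs to
# the system capabilities it implies.
process_flow = [
    {
        "step": "select audience",
        "inputs": ["customer list", "segment definition"],
        "processing": "apply segment rules to the customer list",
        "outputs": ["mailing list"],
        "required_capabilities": ["query builder", "list import"],
    },
    {
        "step": "send messages",
        "inputs": ["mailing list", "message template"],
        "processing": "merge personalization fields and transmit",
        "outputs": ["sent log"],
        "required_capabilities": ["personalization", "delivery engine"],
    },
    {
        "step": "measure response",
        "inputs": ["sent log", "response data"],
        "processing": "match responses back to the mailing",
        "outputs": ["response report"],
        "required_capabilities": ["response attribution", "reporting"],
    },
]

# The buying question becomes concrete: can the candidate product
# demonstrate every capability the flow requires?
needed = {c for step in process_flow for c in step["required_capabilities"]}
print(sorted(needed))
```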

The level of detail required depends on the situation. But my point today is simply that you have to think things through and visualize how the software will accomplish your goals. If you can't yet do that, you’re not ready to make a purchase.

Lifetime Value is More than Another Way to Spell ROI

One of our central propositions at Client X Client is that every business decision should be measured by its impact on customer lifetime value. This is because lifetime value provides a common denominator to compare decisions that are otherwise utterly dissimilar. How else do I choose whether to invest in a new factory or improve customer service?

I was presenting this argument yesterday when I realized that you could say the same for Return on Investment. That brought me up short. Is it possible that we’re really not adding anything beyond traditional ROI analysis? Have we deluded ourselves into thinking this is something new and useful?

But remember what most ROI analyses actually look like: they isolate whatever cost and revenue elements are needed to prove a particular business case. The new factory is justified by lower product costs; better customer service is justified by higher retention rates. But each of those is just a portion of the actual business impact of the investment. If the new factory produces poor quality products, it may have a negative impact on lifetime value. If better customer service only retains less profitable customers, it may also be a poor investment.

This is the reason you need to measure lifetime value: because lifetime value inherently forces you to consider all the factors that might be impacted by a decision. As my previous posts have discussed, these can be summarized along two dimensions, with three elements each: order type (new, renewal and cross sell) and financial value (revenue, promotion cost, fulfillment cost). Those combine to form a convenient 3x3 matrix that can serve as a simple checklist for assessing any business analysis: have you considered the estimated impact of the proposed decision on each cell? There’s no guarantee your answers will be correct, but at least you’ll have asked the right questions. That alone makes lifetime value more useful than conventional ROI evaluations.
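To show what I mean, here is a minimal sketch of the matrix as a decision worksheet, with entirely hypothetical impact estimates. The checklist question is simply whether every cell has been assessed; the sum of the cells is the estimated change in lifetime value.

```python
# A minimal sketch of the 3x3 lifetime value checklist. The
# dimensions come from the post; the impact estimates are
# hypothetical placeholders for a proposed decision.

ORDER_TYPES = ["new", "renewal", "cross_sell"]
FINANCIAL_VALUES = ["revenue", "promotion_cost", "fulfillment_cost"]

# Estimated impact per customer, with costs entered as negatives.
impact = {
    ("new", "revenue"): 12.0,
    ("new", "promotion_cost"): -4.0,
    ("new", "fulfillment_cost"): -2.5,
    ("renewal", "revenue"): 6.0,
    ("renewal", "promotion_cost"): -1.0,
    ("renewal", "fulfillment_cost"): -1.5,
    ("cross_sell", "revenue"): 3.0,
    ("cross_sell", "promotion_cost"): -0.5,
    ("cross_sell", "fulfillment_cost"): -0.5,
}

# The checklist question: has every cell been considered?
missing = [(o, f) for o in ORDER_TYPES for f in FINANCIAL_VALUES
           if (o, f) not in impact]
assert not missing, f"Unassessed cells: {missing}"

# Net change in lifetime value implied by the estimates.
ltv_delta = sum(impact.values())
print(f"Estimated LTV impact per customer: ${ltv_delta:.2f}")
```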

Why You're Going to Replace the Mobile Marketing Software You Haven't Even Bought Yet

I’ve seen several (well, two) articles recently about mobile marketing software. That’s one mention short of a trend, but I figured I’d be proactive and see what was going on out there. The general idea behind the articles was that new products are now making it easier to do serious campaign management for mobile promotions.

Somewhat to my disappointment, a quick bit of Googling showed there are many more than two products already present in this space. Most seem to be SMS bulk mailers—very much the equivalent of simple software for sending direct mail or mass emails. Of course, we all know that sort of untargeted marketing is a bad idea in any channel and pretty much unthinkable in mobile marketing, where the customer pays to receive the message. So those products aren’t worth much attention.

But there do seem to be several more sophisticated products that offer advanced capabilities. I won’t mention names because I haven’t spent enough time researching the topic to understand which ones are truly important.

Still, my general thought for the day is that it's silly to have to invent the features needed for this sort of product. Surely the marketing world has enough experience by now to understand the basic features necessary to run campaigns and manage interactions. Any list would include customer profiles, segmentation, testing, response analysis, propensity modeling, and lifetime value estimates (yes, that last one is special pleading; sorry, I’m obsessed). I could come up with more but the cat is sitting on my lap. The point is, it makes vastly more sense to extend current marketing systems into the mobile channel than to build separate mobile marketing systems that will later need to be integrated.

Marketing software vendors surely see this opportunity. But to take advantage of it, they would need to invest not merely in technology, but also in the services and expertise needed to help novice marketers enter the mobile channel. This is expensive and experts are rare. So it’s more likely the vendors will defer any major effort until standard practices are widely understood. Pity – it will mean a lot of work down the road to fix the problems now being created.

ClickFox Generates Detailed Experience Maps

I’m just finishing a review for next month's DM News of ClickFox, a product that visualizes the paths followed by customers as they navigate interactive voice response (IVR), kiosks, Web sites and other self-service systems. John Pasqualetto, ClickFox’s Director of Business Development, tells me the product is easy to sell: basically, the company loads a week’s worth of interaction logs, plots them onto a model that classifies the different types of events, and shows customer paths through that model to the prospective buyer. “The value jumps right out,” according to John. “Users say, ‘I’ve never seen my data presented this way before.’”

If you think this sounds similar to the funnel reports provided by most Web analytics systems, so do I. One difference is the visual representation: it's hard to describe this in words, but if you look at the ClickFox Web site, you’ll see they organize their models around the purpose of the different Web pages or IVR options. Essentially, they build a conceptual map of the system. Web analytics systems generally take a more mechanical approach, arranging pages based on frequency of use or links to other pages. This makes it harder to grasp what’s actually going on from the customer viewpoint. (On the other hand, the Web systems can usually drill down to display the actual pages themselves, which ClickFox does not. Web systems also deal with higher data volumes than IVRs.)
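To illustrate the general idea (this is my own sketch, not ClickFox's actual method), classifying raw self-service events into purpose categories and counting the transitions between them yields the skeleton of such an experience map. The event names, purpose model, and sessions below are all hypothetical.

```python
# Hypothetical sketch of a purpose-based path map: classify raw
# self-service events into purpose categories, then count the
# transitions each session makes between purposes.
from collections import Counter

# Raw event -> purpose category (a hand-built conceptual model).
PURPOSE = {
    "main_menu": "orient",
    "balance_prompt": "account info",
    "balance_readout": "account info",
    "payment_menu": "pay bill",
    "agent_request": "escalate",
}

sessions = [  # hypothetical IVR logs, one event list per caller
    ["main_menu", "balance_prompt", "balance_readout"],
    ["main_menu", "payment_menu", "agent_request"],
    ["main_menu", "balance_prompt", "agent_request"],
]

transitions = Counter()
for events in sessions:
    purposes = [PURPOSE[e] for e in events]
    # collapse repeats so a path reads as purpose-to-purpose moves
    path = [p for i, p in enumerate(purposes) if i == 0 or p != purposes[i - 1]]
    transitions.update(zip(path, path[1:]))

for (src, dst), n in transitions.most_common():
    print(f"{src} -> {dst}: {n}")
```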

The ClickFox approach is also somewhat similar to Client X Client's beloved Customer Experience Matrix, which also plots interactions by purpose. But we generally work at a higher level—that is, we’d look at an IVR session as a single transaction, rather than breaking it into its components. We also think in terms of a standard set of purpose categories, rather than defining a custom set in each situation. (Of course, custom sets make sense when you’re analyzing at the ClickFox level of detail.) So ClickFox would be complementary rather than competitive with what we do. Otherwise, I would not have been able to review them.

What’s really important in all this is that ClickFox provides another good tool for Customer Experience Management. The more of those, the better.

Survey Highlights Interest in Marketing Performance Measurement

According to The CMO Council, “the majority of marketers feel that their top goal in 2007 is to quantify and measure the value of marketing programs and investments (43.8%)” and “respondents tapped [marketing] performance dashboards as the top automated solution to be deployed in 2007.”

This is happy news for Client X Client, since that’s the pond we swim in. The Grinch in me points out that 43.8% is not a majority and that the actual survey question asked about “issues or challenges”, not goals. And if I get really cranky, I remember that the CMO Council runs a Marketing Performance Measurement Forum and Mastering MPM Online Certificate program—so there is probably some bias in the membership and perhaps their survey technique.

But despite these caveats, it’s good to see that performance measurement ranks high in the lists of concerns and system plans. The survey, of 350 marketers (we don’t know how many responded), also covered top accomplishments in 2006 (number one: “restructured and realigned marketing”), organizational plans (top item: “add new competencies and capabilities”), progress in improving the perception of marketing within their company (a Lake Wobegon-like 67.7% are “above average”), span of marketing authority, agency relationship changes, and sources of information and advice. It’s interesting stuff and available for free (registration required).

Proving the Value of Site Optimization

Eric’s comment on yesterday’s post, to the effect that “There shouldn’t be much debate here. Both full and fractional designs have their place in the testing cycle,” is a useful reminder that it’s easy to get distracted by technical details and miss the larger perspective of the value provided by testing systems. This in turn raises the question posed implicitly by Friday’s post and Demi’s comment, of why so few companies have actually adopted these systems despite the proven benefits.

My personal theory is that it has less to do with a reluctance to be measured than with a lack of time and skills to conduct the testing itself. You can outsource the skills part: most if not all of the site testing vendors have staff to do this for you. But time is harder to come by. I suspect that most Web teams are struggling to keep up with demands for operational changes, such as accommodating new features, products and promotions. Optimization simply takes a lower priority.

(I’m tempted to add that optimization implies a relatively stable platform, whereas things are constantly changing on most sites. But plenty of areas, such as landing pages and checkout processes, are usually stable enough that optimization is possible.)

Time can be expanded by adding more staff, either in-house or outsourced. This comes down to a question of money. Measuring the financial value of optimization comes back to last Wednesday's post on the credibility of marketing metrics.

Most optimization tests seem to focus on simple goals such as conversion rates, which have the advantage of being easy to measure but don’t capture the full value of an improvement. As I’ve argued many times in this blog, that value is properly defined as change in lifetime value. Calculating this is difficult and convincing others to accept the result is harder still. Marketing analysts therefore shy away from the problem unless pushed to engage it by senior management. The senior managers themselves will not be willing to invest the necessary resources unless they believe there is some benefit.

This is a chicken-and-egg problem, since the benefit from lifetime value analysis comes from shifting resources into more productive investments, but the only way to demonstrate that this is possible is to do the lifetime value calculations in the first place. The obstacle is not insurmountable, however. One-off projects can illustrate the scope of the opportunity without investing in a permanent, all-encompassing LTV system. The series of “One Big Button” posts culminating last Monday described some approaches to this sort of analysis.

Which brings us back to Web site testing. Short term value measures will at best understate the benefits of an optimization project, and at worst lead to changes that destroy rather than increase long term value. So it makes considerable sense for a site testing trial project to include a pilot LTV estimate. It’s almost certain that the estimated value of the test benefit will be higher when based on LTV than when based on immediate results alone. This higher value can then justify expanded resources for both site testing and LTV.
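A minimal sketch shows why the two valuations diverge. All the numbers below are hypothetical: assume a test lifts conversion rate, and that each converted visitor carries some future profit beyond the first order.

```python
# Minimal sketch: valuing an optimization test on immediate
# results vs. lifetime value. All numbers are hypothetical.

visitors = 100_000
baseline_conversion = 0.020
test_conversion = 0.023            # observed lift from the test
first_order_margin = 40.0          # profit on the initial purchase
future_value_per_customer = 120.0  # discounted future profit beyond first order

extra_customers = visitors * (test_conversion - baseline_conversion)

immediate_value = extra_customers * first_order_margin
ltv_value = extra_customers * (first_order_margin + future_value_per_customer)

print(f"Extra customers:        {extra_customers:,.0f}")
print(f"Immediate-result value: ${immediate_value:,.0f}")
print(f"LTV-based value:        ${ltv_value:,.0f}")
# The LTV figure is 4x larger here -- and if the test had attracted
# low-value customers, future_value_per_customer could shrink the
# benefit or turn it negative.
```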

And you thought last week’s posts were disconnected.

Is Taguchi Good for Multivariate Testing?

I’ve spent a lot of time recently talking to vendors of Web site testing systems. One topic that keeps coming up is whether Taguchi testing—which tests selected combinations of variables and infers the results for untested combinations—is a useful technique for this application. Some vendors use it heavily; some make it available but don’t recommend it; others reject it altogether.

Vendors in the non-Taguchi camp tell me they’ve done tests comparing Taguchi and “full factorial” tests (which test all possible combinations), and gotten different results. Since the main claim of Taguchi is that it finds the optimum combination, this is powerful practical evidence against it. On the theoretical level, the criticism is that Taguchi assumes that there are no interactions among test variables, meaning results for each variable are not affected by the values of other variables, when such interactions are in fact common. Moreover, how would you know whether interactions existed if you didn’t test for them? (Taguchi tests are generally too small to find interactions.)

Taguchi proponents might argue that careful test design can avoid interactions. But the more common justification seems to be that Taguchi makes it possible to test many more alternatives than conventional A/B tests (which change just one item at a time) or full-factorial designs (which need a lot of traffic to get adequate volume for each combination.)
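A small worked example makes the traffic argument clear. With four page elements at three versions each, a full factorial design has 3^4 = 81 combinations to fill with traffic, while a standard L9 orthogonal array covers the same main effects in nine runs. The sketch below uses hypothetical response rates; note that the main-effect averaging at the end is valid only if the no-interaction assumption holds, which is exactly what the critics challenge.

```python
# Full factorial vs. a Taguchi-style L9 orthogonal array for
# four factors (page elements) at three levels (versions) each.
from itertools import product

factors, levels = 4, 3
full_factorial = list(product(range(levels), repeat=factors))
print(len(full_factorial))  # 81 combinations to fill with traffic

# Standard L9(3^4) array: nine runs; every pair of columns
# contains each of the nine level pairs exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical observed response rate for each of the nine runs.
responses = [0.020, 0.024, 0.022, 0.027, 0.021, 0.023,
             0.025, 0.022, 0.026]

# Main-effect estimate: average response at each level of each
# factor. Valid only if factors do not interact.
for f in range(factors):
    means = [
        sum(r for run, r in zip(L9, responses) if run[f] == lvl) / 3
        for lvl in range(levels)
    ]
    print(f"factor {f}: best level = {means.index(max(means))}")
```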

So, the real question is not whether Taguchi ignores interactions (it does), but whether Taguchi leads to better results more quickly. This is possible even if those results are not optimal, because Taguchi lets users test a wider variety of options with a given amount of traffic. I’m guessing Taguchi does help, at least for sites without huge visitor volumes.

Incidentally, I tried to do a quick classification of which vendors favor Taguchi. But it’s not so simple, because even vendors who prefer other methods still offer Taguchi as an option. And some alternative methods can be seen more as refinements of Taguchi than total rejections of it. So I think I’ll avoid naming names just now, and let the vendors speak for themselves. (Vendors to check: Offermatica, Optimost, Memetrics, SiteSpect, Vertster.)

The Market for Web Testing Systems is Young

I’ve received online newsletters in the past week from three Web optimization vendors: [x+1], Memetrics and Optimost. All are interesting. The article that particularly caught my attention was a JupiterResearch report available from [x+1], which contained results from a December 2005 survey of 251 Web site decision makers in companies with $50 million or more in annual revenues.

What was startling about the report is its finding that 32% of the respondents said they had deployed a “testing/optimization” application and another 28% planned to do so within the next twelve months. Since a year has now passed, the current total should be around 60%.

With something like 400,000 U.S. companies with $50 million or more in revenue, this would imply around 200,000 installations. Yet my conversations with Web testing vendors show they collectively have maybe 500 installations—certainly fewer than 1,000.

To put it mildly, that's a big difference.

There may be some definitional issues here. It's extremely unlikely that JupiterResearch accidentally found 80 (32% of 251) of the 500 companies using the testing systems (especially since the total was probably under 500 at the time of the survey). So, presumably, some people who said they had testing systems were using products not on the standard list. (These would be Optimost, Offermatica, Memetrics, SiteSpect and Vertster. JupiterResearch adds what I label as “behavioral targeting” vendors: [x+1] and Touch Clarity (now part of Omniture), e-commerce platform vendor ATG, and testing/targeting hybrid Kefta.) Maybe some other respondents weren’t using anything and chose not to admit it.

But I suspect the main factor is sample bias. JupiterResearch doesn’t say where the original survey list came from, but it was probably weighted toward advanced Web site users. As in any group, the people most involved in the topic are most likely to have responded, further skewing the results.

Sample bias is a well-known issue among researchers. Major public opinion polls use elaborate adjustments to compensate for it. I don’t mean to criticize JupiterResearch for not doing something similar: they never claim their sample was representative or that the numbers can be projected across all $50 million+ companies.

Still, the report certainly gives the impression that a large fraction of potential users have already adopted testing/optimization systems. Given what we know about the vendor installation totals, this is false. And it’s an error with consequence: vendors and investors act differently in mature vs. immature markets. Working from the wrong assumptions will lead them to costly mistakes.

Damage to customers is harder to identify. If anything, a fear of being the last company without this technology may prompt them to move more quickly. This would presumably be a benefit. But the hype may also lead them to believe that offerings and vendors are more mature than in reality. This could lead them to give less scrutiny to individual vendors than they would if they knew the market is young. And maybe it’s just me, but I believe as a general principle that people do better to base their decisions on accurate information.

I don’t think the situation here is unique. Surveys like these often give penetration numbers that seem unrealistically high to me. The reasons are probably the same as the ones I’ve listed above. It’s important for information consumers to recognize that while such surveys give valuable insights into how users are behaving, they do have their limits.

Is SiteSpect Really Better? How Would You Know?

Tuesday’s post and subsequent discussion of whether SiteSpect’s no-tag approach to Web site testing is significantly easier than inserting Javascript tags has been interesting but, for me at least, inconclusive. I understand that inserting tags into a production page requires the same testing as any other change, and that SiteSpect avoids this. But the tags are only inserted once, either per slot on a given page or for the page as a whole. After this, any number of tests can be set up and run on that page without additional changes. And given the simplicity of the tags themselves, they are unlikely to cause problems that take a lot of work to fix.

Of course, no work is easier than a little work, so avoiding tags does have some benefit. But most of the labor will still be in setting up the tests themselves. So the efficiency of the set up procedure will have much more impact on the total effort required to run a Web testing system than whether or not it uses tags. I’ve now seen demonstrations of all the major systems—Offermatica, Memetrics, Kefta, Optimost and SiteSpect—and written reviews of the first three (posted in my article archive). But even that doesn’t give me enough information to say one is easier to work with than another.

This is a fundamental issue with any kind of software assessment. You can talk to vendors, look at demonstrations, compare function lists, and read as many reviews as you like, but none of that shows what it’s like to use a product for your particular projects. Certainly with the Web testing systems, the different ways that clients configure their Web sites will have a major impact on whether a particular product is hard or easy to use. Deployment effort will also depend on what other systems are part of the site, as well as the nature of the desired tests themselves.

This line of reasoning leads mostly towards insisting that users should run their own tests before buying anything. That’s certainly sound advice: nobody ever regretted testing a product too thoroughly. But testing only works if you understand what you’re doing. Buyers who have never worked with a particular type of system often won’t know enough to run a meaningful test. So simply proclaiming that testing is always the solution isn’t correct.

This is where vendors can help. The more realistic a simulation they can provide of using their product, the more intelligently customers can judge whether the product will work for them. The reality is that most customers’ needs can be met by more than one product. Even though customers rightly want to find the best solution, all they really need is to find one that’s adequate and get on with their business. The first vendor to prove they can do the job, wins.

Products that claim a unique and substantial advantage over competitors, like SiteSpect, face a tougher challenge. Basically, no one believes it when vendors say their product is better, simply because all vendors say that. So vendors making radical claims must work hard to prove their case through explanations, benchmarks, case studies, worksheets, and whatever else it might take to show that the differences (a) really exist and (b) really matter. In theory, head-to-head comparisons against other vendors are the best way to do this, but the obvious bias of vendor-sponsored comparisons (not to mention potential for lawsuits) makes this extremely difficult. The best such vendors can do is to state their claims clearly and with as much justification as possible, and hope they can convince potential buyers to take a closer look.

Just as You Always Suspected: Nobody Believes Marketing Effectiveness Measures

I consider it a point of honor not to simply reproduce a vendor’s press release. So when Marketing Management Analytics sent one headed “Most Financial Executives Critical of Marketing Effectiveness Measures: Only 7% Say They are Satisfied with their Company's Ability to Measure Marketing ROI”, I asked to see the details. In this case, it turned out that not only did the press release represent the study accurately, but it also picked up on the same two points that I found most intriguing:

- “only 7% of senior-level financial executives surveyed report being satisfied with their company's ability to measure marketing ROI”, compared with 23% of marketers in a similar earlier survey; and,

- “only one in 10 senior-level financial executives report confidence in marketing's ability to forecast its impact on sales” compared with one in four marketers.

And that, my friends, is the problem in a nutshell: financial managers have almost no confidence in marketing measurements, and marketers don't even realize how bad things are.

With numbers like these, is it any wonder that advanced concepts like customer experience management attract so little executive support? Nobody is willing to take a risk on them because nobody believes the supporting analysis. Note that three-quarters of the marketers themselves are not confident in their measurements.

In fact, the one other really interesting tidbit in the financial executive detail was that “customer value measurements” ranked a surprisingly high number three (34.6%) in the list of marketing effectiveness metrics. Only “effectiveness of marketing driving sales” (52.2%) and “brand equity and awareness” (44.1%) were more common. “Return on marketing investments” (25.7%) and “contribution” (22.8%) ranked lower.

It makes sense to me that “driving sales” would be the most common measure; after all, it is easy to understand and relatively simple to measure. But impact on brand equity and customer value are much more complicated. I do find it odd that they are almost as popular. I’m also trying to reconcile this set of answers with the fact that so few respondents had any confidence in any type of measurement: what exactly does it mean to rely on a measure that you don’t trust?

All in all, though, this survey represents an urgent warning to marketers that they must work much harder to build credible measures for the value of their activities.

(Both surveys were funded by MMA, which provides marketing mix models and other marketing performance analytics. The financial executive survey was conducted with Financial Executives International, while the marketer survey was conducted with the Association of National Advertisers.)

SiteSpect Does Web Tests without Tags

I had a long and interesting talk yesterday with Larry Epstein at SiteSpect, a vendor of Web site multivariate testing and targeting software. SiteSpect’s primary claim to fame is that they manage such tests without inserting any page tags, unlike pretty much all other vendors in this space. Their trick, as I understand it, is to use a proxy server that sits between site visitors and a client’s Web server, inserting test changes and capturing results. Users control changes by defining conditions, such as words or values to replace in specified pages, which the system checks for as traffic streams by.
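Conceptually (and this is only my sketch of the general approach, not SiteSpect's actual code), the logic resembles a rewriting rule applied to pages as they pass through the proxy. The URL pattern, phrase, and variants below are hypothetical.

```python
# Conceptual sketch of proxy-based testing: as HTML streams
# through the proxy, rules assign a visitor to a variant and
# rewrite matching content, with no tags in the page itself.
import random

# Hypothetical rule: on pages matching a URL pattern, replace a
# phrase with one of several test variants.
RULES = [
    {
        "url_contains": "/landing",
        "find": "Sign up today",
        "variants": ["Sign up today", "Start your free trial", "Join now"],
    },
]

def apply_rules(url: str, html: str, visitor_id: int) -> str:
    for rule in RULES:
        if rule["url_contains"] in url and rule["find"] in html:
            # deterministic per visitor, so repeat views are consistent
            rng = random.Random(visitor_id)
            chosen = rng.choice(rule["variants"])
            html = html.replace(rule["find"], chosen)
    return html

page = "<h1>Sign up today</h1>"
print(apply_rules("https://example.com/landing", page, visitor_id=42))
```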

Even though defining complex changes can take a fair amount of technical expertise, users with appropriate skills can make it happen without modifying the underlying pages. This frees marketers from reliance on the technical team that manages the site. It also frees the process from Javascript (which is inside most page tags), which doesn’t always execute correctly and adds some time to page processing.

This is an intriguing approach, but I haven’t decided what I think of it. Tagging individual pages or even specific regions within each page is clearly work, but it’s by far the most widely used approach. This might mean that most users find it acceptable or it might be the reason relatively few people use such systems. (Or both.) There is also an argument that requiring tags on every page means you get incomplete results when someone occasionally leaves one out by mistake. But I think this applies more to site analytics than testing. With testing, the number of tags is limited and they should be inserted with surgical precision. Therefore, inadvertent error should not be an issue and the technical people should simply do the insertions as part of their job.

I’m kidding, of course. If there’s one thing I’ve learned from years of working with marketing systems, it’s that marketers never want to rely on technical people for anything—and the technical people heartily agree that marketers should do as much as possible for themselves. There are very sound, practical reasons for this that boil down to the time and effort required to accurately transfer requests from marketers to technologists. If the marketers can do the work themselves, these very substantial costs can be avoided.

This holds true even when significant technical skills are still required. Setting up complex marketing campaigns, for example, can be almost as much work in campaign management software as when programmers had to do it. Most companies with such software therefore end up with experts in their marketing departments to do the setup. The difference between programmers and these campaign management super users isn’t really so much their level of technical skill, as it is that the super users are part of the marketing department. This makes them both more familiar with marketers’ needs and more responsive to their requests.

Framing the issue this way puts SiteSpect’s case in a different light. Does SiteSpect really give marketers more control over testing and segmentation than other products? Compared with products where vendor professional services staff sets up the tests, the answer is yes. (Although relying on vendor staff may be more like relying on an internal super user than a corporate IT department.) But most of the testing products do provide marketing users with substantial capabilities once the initial tagging is complete. So I’d say the practical advantage for SiteSpect is relatively small.

But I’ll give the last word to SiteSpect. Larry told me they have picked up large new clients specifically because those companies did find working with tag-based testing systems too cumbersome. So perhaps there are advantages I haven’t seen, or perhaps there are particular situations where SiteSpect’s no-tag approach has special advantages.

Time, and marketing skills, will tell.

One Big Button is Built

I did go ahead and implement the “One Big Button” opportunity analysis in my sample LTV system (see last week's posts for details). As expected, it took about a day’s work, mostly checking that the calculations were correct. That still left the challenge of finding report layouts that lead users through the results. There is no one right way to do that, of course. QlikTech makes it easy to experiment with alternatives, which is a mixed blessing since it’s perhaps too much fun to play with.

My final (?) version shows a one-line summary plus details for three types of changes (acquisition, renewal/retention, and cross sell), each split into recommendations for increased vs. decreased investment. Users can drill down to see details on the individual products and sources. That should tell them pretty much what they need to know.
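For flavor, here is a minimal sketch of that summary structure. The products and values are hypothetical placeholders; the only figure taken from my actual results is the button total mentioned below.

```python
# Hypothetical sketch of the One Big Button summary: each
# recommendation carries a change type, a direction, and an
# estimated value, and the button label totals them up.
from collections import defaultdict

recommendations = [  # product, change type, direction, estimated value
    ("Product A", "acquisition", "increase", 900_000),
    ("Product B", "acquisition", "decrease", 350_000),
    ("Product C", "renewal/retention", "increase", 600_000),
    ("Product D", "cross sell", "increase", 770_707),
]

summary = defaultdict(float)
for product, change, direction, value in recommendations:
    summary[(change, direction)] += value

for (change, direction), value in sorted(summary.items()):
    print(f"{change:18s} {direction:8s} ${value:,.0f}")

total = sum(v for _, _, _, v in recommendations)
print(f"Button label: How can I add ${total:,.0f} more profit?")
```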

I was eager to see the results of the calculations—remember, I’m working with live data—and was pleased to see they were reasonable: the system proposed changes to fewer than half the total products and estimated a 10% increase in value. Claims of huge potential improvement would have been less credible.

That left just one question: what should be on the One Big Button itself? The color choice was easy—a nice monetary green. But “Make more money!” seems a bit crass, while “Recommendations” sounds so bland. Since the button label can be a formula, I ended up calculating the estimated value of the opportunities and displaying “How can I add $2,620,707 more profit?” If that doesn’t get their attention, I don’t know what will.

Convincing Managers to Care about Customer Value Measures

I spoke earlier this week at the DAMA International Symposium and Wilshire Meta-Data Conference, which serves a primarily technical audience of data modelers and architects. My own talk was about applications for customer value metrics, which boiled down to lifetime value applications and building them with the Customer Experience Matrix. (In fact, preparation for this talk is what inspired my earlier series of posts on that topic.)

One of the questions that came up was how to convince business managers that this sort of framework is needed. I’m not sure I gave a particularly coherent answer at the time, but this is in fact something to which Client X Client has given a fair amount of thought. The correct (if clichéd) response is that different managers have different needs, so you have to address each person appropriately.

CEOs, COOs and other top managers are looking at company-wide issues. Benefits that matter to them include:

- understanding how customers are being treated across different parts of the organization. Of course, this “customer eye view” is the central proposition of both the Customer Experience Matrix and customer experience management in general. But in practice it’s still very hard to come by, and good CEOs and COOs recognize how desperately they need it.

- gaining metrics for customer experience management. I’ve made this point many times in this blog but I’ll say it again: the only way to focus an organization on customer experience is to measure the financial impact of that experience. Top managers understand this intuitively. If they really believe customer experience is important, they’ll eagerly adopt a solution that provides such measures.

- identifying opportunities for improvement. Measuring results is essential, but managers want even more to know where they can do better. This comes back to the One Big Button I’ve been writing about all week. The Customer Experience Matrix and other customer value approaches offer specific techniques to surface experience improvement opportunities and estimate their value.

- optimizing resource allocation. Choosing where to direct limited resources is arguably the central job of senior management. Impact on customer value is the one criterion that can meaningfully compare investments throughout the company. It offers senior managers both a tool for their own use and a communication mechanism to get others in the company thinking the same way.

Chief Financial Officers share the CEO’s company-wide perspective but look at things from a financial viewpoint. For them, customer value approaches offer:

- new business insights from new metrics. Although the CFO’s job is to understand what’s happening in the business from financial data, the information from traditional financial systems is really quite limited. Customer value measures organize information in ways that reveal patterns and trends in customer behavior which traditional measures do not.

- better forecasting. Forecasts based on individual customers or customer segments can be significantly more accurate than simple projections based on aggregate trends or percentage changes. Forecast quality has always been important but it’s even more of a hot button because of Sarbanes-Oxley and other corporate governance requirements.

- cross-function Return on Investment measures. CFOs are ultimately responsible for ensuring that ROI estimates are accurate. Customer value metrics help them to identify the impact of investments across departments and over time. These effects are often hidden from departmental managers who would otherwise prepare estimates based only on the impact within their own area.

Marketing departments gain substantial operational benefits from customer value measurements and the Customer Experience Matrix. These include:

- better ways to visualize, control and coordinate customer treatments. Different departments and systems execute treatments in different channels and stages of the product life cycle. Bringing information about these together in one place is a major challenge that the Customer Experience Matrix in particular helps to meet. Applications range from setting general experience strategies to managing interactions with individual customers.

- monitoring customer behavior for trends and opportunities. A rich set of customer value measures will highlight important changes as quickly as they occur. On a strategic level, the Customer Experience Matrix identifies the value (actual and potential) of every step in the purchase cycle to ensure companies get the greatest possible return from every customer-facing event.

- measuring return on marketing investments. Customer value measurements give marketers the tools they need to prove the value of their expenditures. This improves the productivity of their spending while ensuring they can justify their budgets to the rest of the company.

Customer Service managers, like marketers, deal directly with customers and need tools to measure the effectiveness of their efforts. Benefits for them include:

- visualizing standard contact policies and individual contact histories. The Customer Experience Matrix provides tools to track the flow of customers through product purchase and use stages, to see the specific treatments they receive, and to display individual event histories to an agent or self-service system as an interaction occurs. All this helps managers to understand and improve how customers are treated.

- identifying best treatment rules based on long-term results. Customer value measurements can show the impact of each treatment on long-term value. Without them, managers are often stuck looking only at immediate results or have no result information at all. Having a good measurement system in place makes it easy for managers to continually test, evaluate and refine alternative treatments.

- recommending treatments during interactions. The optimal business rules discovered by customer value analysis can be deployed to operational systems for execution. A strong customer value framework will support on-the-fly calculations that can adjust treatment recommendations based on information gathered during the interaction itself. (A minimal sketch of such a rule appears below.)
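Here is what such a deployed rule might look like. This is purely a sketch under my own assumptions; the treatments, base value estimates, and interaction signals are all hypothetical.

```python
# Hypothetical sketch of an on-the-fly treatment rule: pick the
# treatment with the highest expected lifetime value impact,
# adjusted by what the interaction itself has revealed.

# Estimated LTV impact of each treatment (from offline analysis).
BASE_LTV_IMPACT = {"retention offer": 45.0, "cross sell pitch": 30.0, "no offer": 0.0}

def recommend(interaction: dict) -> str:
    """Adjust base estimates with in-interaction signals and
    return the treatment with the highest expected value."""
    adjusted = dict(BASE_LTV_IMPACT)
    if interaction.get("expressed_frustration"):
        adjusted["cross sell pitch"] -= 40.0  # bad moment to sell
        adjusted["retention offer"] += 10.0   # good moment to retain
    if interaction.get("asked_about_product"):
        adjusted["cross sell pitch"] += 25.0  # customer invited the pitch
    return max(adjusted, key=adjusted.get)

print(recommend({"expressed_frustration": True}))  # retention offer
print(recommend({"asked_about_product": True}))    # cross sell pitch
```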

If there’s a common theme to all this, it’s that customer value measurement gives managers at all levels a new and powerful tool to quantify the impact of business decisions on long-term value. Let me try that again: in plain English, it helps them make more money. If that’s not a compelling benefit, I don’t know what is.