Why Your Customer Success Managers Need Comparative Insights for Each Customer

“If you’re not getting better, you’re getting worse.”

What do executives mean when they use this phrase? They know that if their company stays as it is, it will eventually fall behind its competitors. Their competitors’ success forces them to change how they perform, and vice versa. Hence, their company has to keep getting better, even if it isn’t sick.

Take a look at Salesforce.com in 2004. They had 20,000 customers of their cloud-based CRM software, up from about 6,000 a few years earlier, and were rapidly on their way to unicorn status of a $1B market cap. But they were losing 8% of their subscribers each month. Yikes! That’s a big problem for a recurring revenue business that depends on growing revenue from its customer base. Salesforce responded by shifting from reactively to proactively managing its existing customer relationships, eventually cutting churn to 1% per month and stemming the loss of customers to competitors. Their competitors had to follow suit to keep up. Now, proactive customer success management is one of the hottest movements in business today.

What does this mean for your Customer Success?
All of your customers need to keep improving, too, or they’ll fall behind. And your customers know this. Consequently, they need to know how they’re doing and where they can improve versus others like them. These comparative insights motivate improvement actions inside their companies, which leads to achieving their desired business outcomes — including staying competitive.

If your company offers cloud-based, recurring revenue solutions such as a SaaS subscription, then you have data that’s a byproduct of your customer relationships and their usage of your solutions. As your customers are buying the same solutions for similar business reasons, this data contains a rich source of comparative insights for each customer – insights they can’t get anywhere else.

Your customers can use these unique insights to improve. That’s important to you, too. If they’re not improving, then their outcomes, health and loyalty won’t improve, and you won’t achieve the retention and upselling rates that your company needs to grow.

Comparative insights help Customer Success Managers do their job better.
Today, your CSMs likely use customer data to gain a full picture of an account and to evaluate a customer’s likelihood for renewing, buying more or advocating your solution. This customer insight serves your company well, whereas comparative insights serve your customers well. With unique comparative insights, CSMs can deliver more value to each customer in each interaction. They’ll get a better response to their proactive outreaches, have more strategic conversations with key stakeholders during quarterly business reviews and, most importantly, provoke customer actions to improve their behaviors with your solutions.

Generating comparative insights
The process of identifying areas where one can improve by comparing to others is called benchmarking. With your own data, your CSMs can use benchmarking, too. You can create scorecards with a spreadsheet or CSM software product to rank customers and do simple performance comparisons on individual metrics. If you have relevant benchmarks or targets, CSMs can draw additional insights. But the really noteworthy, action-provoking insights are much harder to find. You’ll need data analysts or data scientists who can perform deep, comparative analysis involving complex correlations, comparisons and clustering across multiple dimensions. Unfortunately, they’re in short supply and CSMs have to wait in line to get what they need.
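For illustration, the simple scorecard-style ranking described above can be sketched in a few lines. The customer names, metrics, and benchmark value here are all hypothetical:

```python
# A minimal sketch of a scorecard-style comparison on a single metric,
# using hypothetical customer data and an assumed benchmark value.
customers = {
    "Acme": {"logins_per_week": 42, "modules_adopted": 5},
    "Rhynyx": {"logins_per_week": 18, "modules_adopted": 7},
    "Cumulus": {"logins_per_week": 31, "modules_adopted": 3},
}

def rank_on_metric(data, metric, benchmark=None):
    """Rank customers on one metric, flagging those below a benchmark."""
    ranked = sorted(data.items(), key=lambda kv: kv[1][metric], reverse=True)
    return [(name, vals[metric],
             benchmark is not None and vals[metric] < benchmark)
            for name, vals in ranked]

for name, value, below in rank_on_metric(customers, "logins_per_week",
                                         benchmark=30):
    print(name, value, "below benchmark" if below else "ok")
```

This is exactly the single-metric ranking that a spreadsheet produces; the deeper, multi-dimensional comparative analysis discussed next is what it cannot do.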

As a result, CSMs settle for rankings and benchmarks without the deeper analysis, which limits their value. It’s like having Consumer Reports’ published rankings and performance ratings of cars but without the written analysis for each.

An easy, scalable way to get comparative insights
I’m quite excited that, just last week, my company OnlyBoth announced a new customer analytics solution that can help CSMs take full advantage of the unique insights that exist in their company’s data. OnlyBoth’s Customer Benchmarking Engine uses artificial intelligence to completely automate the searching, analysis and reporting of comparative insights in data. CSMs can simply enter a customer name and get numerous comparative insights in seconds. The software performs a massive, sophisticated data analysis that would take many data scientists many months to do.

Here’s an example of a deep, comparative insight the engine found in the data we gave it. This insight is written up by the software using its natural language generation technology.


Rhynyx is among a group of 55 customers who use a vendor’s HR software suite more actively than the vendor’s other customers. However, Rhynyx is not keeping pace in its use of the recruiting module, a core sticky function of the vendor’s HR suite. A CSM can use this insight to proactively engage Rhynyx, examine the root causes and take action so Rhynyx can achieve outcomes similar to those of its 55 peer customers.

You can see more examples and learn about our software here.

As someone who sits at the intersection of customer success leadership and benchmarking innovation, I hope to share more about this important but under-discussed topic in future posts. If you have experiences and lessons learned with using benchmarking and comparative insights for Customer Success, I’d love to hear from you.

Jim Berardone

Benchmarking the Tax Systems of 195 Countries

Taxation at the national level is controversial. Economists gather data and form opinions, and so do politicians. Factual, comparative insights on worldwide tax systems are needed. So we applied an automated Benchmarking Engine (taxes.onlyboth.com) to tax data on 195 countries, uncovering 7,617 insights, or about 39 per country, all in perfect English and fully automated.


Paying the Tax (Collector), by Pieter Brueghel the Younger

The U.S. Agency for International Development publishes a fascinating Collecting Taxes Database on the tax systems of the world’s countries. The database contains 33 attributes covering various metrics and traits: tax rates, efficiency in collecting the revenue that the rates target, diversity in sources of tax revenue (VAT, personal income, corporate, etc.), tax administration, and so on.

We downloaded the latest available version (2012-2013) as well as an earlier 2009-2010 version, in order also to express changes over a three-year interval and enable benchmarking on trends.

The U.S. corporate-tax system is controversial because of its very high rate.  Does the engine find any noteworthy insights relating to corporate taxation?  Indeed it does, which we’ll quote at length:

USA has the lowest corporate income tax productivity (0.07) of the 32 nations with at least 9.3% personal income tax collection as a percentage of GDP (USA is at 11.8%). That 0.07 compares to an average of 0.24 and standard deviation of 0.24 across the 32 nations.

Reaching the average of 0.24 would imply a total increase of 5.9% (absolute) in corporate income tax collection as a percentage of GDP.

USA has these standings among those 32 nations:
corporate income tax collection as a percentage of GDP = 2.6% (8th-least)
corporate income tax rate = 35.0% (most overall)

USA trailed France (0.08), Malawi (0.08), Austria (0.09), and Belgium (0.09), and others, ending with Algeria (0.86).

1 out of the other 31 nations was ruled out due to missing, unknown, or not-applicable values for corporate income tax productivity, i.e., Angola.

Let’s interpret this. First, it says that the U.S. has the highest corporate income tax rate (35%) in the world. However, this high rate leads to a low revenue outcome, as indicated by the low 0.07 productivity score. The Collecting Taxes Database calculates this corporate-income-tax productivity by “dividing the ratio of total corporate income tax revenues to GDP by the general corporate income tax rate.”
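To make that definition concrete, here is the U.S. calculation using the figures quoted in the insight above:

```python
# Corporate-income-tax productivity, per the Collecting Taxes Database:
# (corporate tax revenue as a share of GDP) divided by the corporate tax rate.
revenue_share_of_gdp = 0.026   # USA: 2.6% of GDP
corporate_tax_rate = 0.35      # USA: 35%

productivity = revenue_share_of_gdp / corporate_tax_rate
print(round(productivity, 2))  # 0.07, matching the engine's figure
```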

Not only is productivity low, but it’s the lowest of the 32 nations that collect a significant share (at least 9.3%) of personal income in relation to GDP (gross domestic product). It’s lower than France, Malawi, and others.  Here’s a plot:

U.S. corporate income tax productivity

A separate insight reveals that the U.S. has the lowest corporate-income-tax productivity of the 17 nations with an agriculture sector as a percentage of GDP of at most 1.6% (the U.S. is at 1.2%).

Now let’s click on What’s best in class? to see what the U.S. could aspire to, as shown by nations that are similar, i.e., whose overall values in the database are most similar to the U.S.  It turns out that Hong Kong and South Korea do best among the 20 countries most like the U.S.

Hong Kong has the highest corporate income tax productivity (0.32) among the 20 nations most similar to USA (with 0.07) that likewise have a high-income economy.

Next with 0.18 is South Korea.

USA is 35th best among the 40 nations with applicable values and that have a high-income economy, which range from a worst of 0.02 (Bahrain) to a best of 0.99 (Qatar), with an average of 0.20 and standard deviation of 0.21.

Among all 164 nations with applicable values, the overall average is 0.15 and standard deviation is 0.16. Best is Qatar, with 0.99.

As is typical of a benchmarking engine, we can leave it to human experts – economists and political leaders in this case – to figure out whether the U.S. has a tax problem, what’s causing it, what are possible solutions, and which solution is best.  Our aim has been to provide this Taxes Benchmarking Engine as a public service and as a showcase of what automated benchmarking can do, as was done earlier for college financials, hospitals, and nursing homes.

Benchmarking need not be taxing. Simply enter any country at taxes.onlyboth.com and see how it’s doing, where it could improve, what’s trending, and what’s best in class.

Raul Valdes-Perez

 

 

Benchmarking 15,665 Nursing Homes

Today OnlyBoth launches as a public service what is likely the largest benchmarking analysis ever conducted, as measured in terms of readable language output.

The U.S. has about 1.4 million nursing-home residents and 15,665 Medicare- or Medicaid-certified nursing homes. The federal government, through its regulatory powers and reimbursement function, collects performance data on all of these nursing homes. This data cries out for unbiased comparison, to understand how each home is doing and where each falls short, compared to all peers or their subsets.


We downloaded data from the federal Nursing Home Compare website and spent a couple of days consolidating the information and configuring our Benchmarking Engine, then pushed a button (figuratively!) and waited just a day and a half. The output consists of 642,192 insights, totaling more than two Encyclopaedia Britannicas in terms of English words contained in grammatical, to-the-point sentences and paragraphs. Enter a nursing home and browse the insights at http://nursing.onlyboth.com.

What did the benchmarking engine find? At one extreme, the engine found the most things to say about Signature Healthcare at Saint Francis in Memphis, TN, although most of these were not complimentary. There is clearly room for improvement there.

We often say that a massive analysis, as only an automated benchmarking engine can do, can find specific areas where even the best can improve. We discussed such a case – Stanford University Hospital – while launching our earlier hospitals benchmarking engine.

Let’s revisit sunny California. The US News & World Report lists Edgemoor Hospital in Santee, CA as the top nursing home in California, due to its top five-star rating in all the major categories. Could even Edgemoor improve? The engine reveals several areas for improvement, this one for example:

Edgemoor Hospital in Santee, CA has the most facility-reported incidents (9) of all the 200 nursing homes that have the top rating in each of overall, health inspection, quality measures, staffing, and registered-nurse staffing (5 total). Those 9 represent 18.8% of the total across the 200 nursing homes, whose average is 0.2.

Now let’s consider Bridgepoint Sub-Acute and Rehab Capitol Hill in Washington, DC, the nursing home nearest Capitol Hill, where Congress meets. This facility does especially well in bladder and bowel control among low-risk, long-stay residents, as compared to other for-profit facilities located within a hospital. Still, the engine found six specific areas for improvement, the first of which is this:

Bridgepoint Sub-Acute and Rehab Capitol Hill in Washington, DC has the 3rd-most high-risk long-stay residents with pressure ulcers (24.8%) among the 1,982 Mid-Atlantic nursing homes. That 24.8% compares to an average of 6.4% across the 1,982 nursing homes.

Another noteworthy insight is that the facility has “the most severe deficiencies on the health survey (5) of the 718 nursing homes that have the top rating in each of quality measures, staffing, and registered-nurse staffing.”

This application is launched as a public service as well as a technology showcase, benchmarking more than triple the 4,803 hospitals of our previous largest application. The relevance to business is this: the advent of cloud services, the internet of things, and other means of collecting customer performance data will enable the automated benchmarking of business processes, generating tremendous economic value by benchmarking 10,000 entities with the same amount of work as benchmarking 10.

Our goal is universal betterment by providing persuasive, motivating insights that pinpoint what is going well and where improvement is sorely needed and is achievable. Benchmarking Engines will do for business benchmarking what Search Engines did for information seeking, assigning to computers what they do better – massive comparisons – and to people what they do better – evaluating and following up, as appropriate – on benchmarking insights.

Raul Valdes-Perez

 

Relaunch of Hospitals Benchmarking Engine

OnlyBoth benchmarks U.S. hospitals both as a public service and as a visible demonstration of the power of an automated Benchmarking Engine. This enables hospital stakeholders to instantly discover, in perfect English, how they’re doing, not compared to absolute standards or arbitrary peers, but to all peers and groups.

We launched our first version this summer. Today we relaunched our hospitals benchmarking engine based on fresh data and technical advances:

  1. updated Hospital Compare dataset from Medicare.gov, now on 4,803 hospitals
  2. new hospital attributes relating to hospital performance and geography
  3. better expression of key types of insights
  4. improved heuristics leading to more insights per hospital
  5. addition of data on hospital networks, enabling intra-network comparisons

1.  We have refreshed the data in the hospital application based on a late-September data release at the Hospital Compare data download page. This new release also contains new hospital attributes, as discussed below.

2.  Since geography is an important determinant of peer groups, we’ve added attributes that enable grouping East Coast, Southern, and Western states. We’ve also added two new attributes from the updated Hospital Compare data relating to deaths or unplanned readmissions due to coronary artery bypass grafting (CABG) surgery, and five new attributes that express hospital-readmission ratios for various afflictions.

3.  A key type of insight expresses how entities that are within an elite peer group fall short along some key dimension. For example, our recent Harvard Business Review article, which explains why benchmarking is done wrong and how to do it right, gives this example of Stanford Hospital:

None of the other 344 hospitals with as many patients who reported YES, they would definitely recommend the hospital (85%) as Stanford Hospital in Stanford, CA also has as few patients who reported that the area around their room was always quiet at night (41%). That is, among those 344 hospitals, it has the fewest patients who reported that the area around their room was always quiet at night.

As the saying goes, this was too clever by half. After considering feedback from surveying users, this insight now appears, with the refreshed data, like this:

Stanford Hospital in Stanford, CA has the fewest patients who reported that the area around their room was always quiet at night (40%) among the 811 hospitals with at least 80% of patients who reported YES, they would definitely recommend the hospital (Stanford Hospital is at 84%). That 40% compares to an average of 69.4% and standard deviation of 10.7% across the 811 hospitals.

Of course, this improvement affects thousands of insights, and millions in the future.

4.  We’ve improved the heuristics that enable finding valuable needles within the huge haystack that results from taking multiple slices out of a dataset of half a million hospital attribute values. Our new Hospitals Benchmarking Engine contains 522,142 insights, or around 109 insights per hospital, up from the previous 101 per hospital. The key benchmarking question – Where can this hospital improve? – has seen a 4% increase in answers per hospital.

5.  For a hospital-network executive, it’s valuable to benchmark individual hospitals against others in the network, especially because knowledge transfer of good practices can happen more easily when two entities have the same owner. We’ve added a parent attribute that for now includes four networks:  UPMC, Kaiser Foundation, Texas Health Resources, and NYC Health and Hospitals. We’ll add other hospital networks over time.

We expect that this hospitals application, and the diffusion of benchmarking engines in general, will further the goal of enabling universal betterment through data-driven comparison with peers, greatly simplified in terms of human work, but greatly expanded in terms of action-provoking insights.

Raul Valdes-Perez

Avoiding Tunnel Vision in Peer Comparisons

Comparing yourself to peers – also known as benchmarking – lets you understand how you’re doing, identify performance gaps and opportunities to improve, and highlight peer achievements that you could emulate, or your own achievements to be celebrated. As long as data is available, peer comparison can potentially accomplish all of these. The opportunities for peer comparison are greatly increasing due to cloud and other services that generate data as a by-product of serving customers.

The problem is that peer comparison as generally practiced suffers from Tunnel Vision and so misses a lot, to everyone’s detriment. To understand why, let’s first consider an analogy to search engines.

An information seeker, before there were search engines, might have gone to consult a librarian on, say, computers and heard “That’s technology, so look in the Technology books section, over in the back, by the right.” But there’s plenty of material on computers that’s catalogued elsewhere, e.g., automation’s impact on employment and job training, the philosophical question of whether computers in principle could do everything that people do, cognitive modeling of human reasoning using computers, computer history, and so on. The point is that looking only in the Technology section is an example of Tunnel Vision, or maybe bookshelf vision. Search engines changed that.

So where’s the Tunnel Vision in peer comparisons? It’s almost universal practice that the benchmarker chooses one or two organizational goals, then picks a few key metrics (key performance indicators) relevant to those goals, and finally selects several peer groups from a limited set. The outputs are then the mean, median, distribution, or high-percentile values for those peer groups on those metrics. The conclusion is that the organization may or may not have a problem, which may or may not be addressable. The flaw in all this is that organizations have many goals and subgoals, and many metrics that could reveal performance gaps, especially if a very large set of peer groups could also be explored. But our human inability to explore many paths in parallel imposes this Tunnel Vision, for the same reason that pre-search-engines information seekers went looking in one or two sections of the library.

As an example of peer-group selection, suppose you wanted to compare the U.S. against other nations. What would be the right peer group? Here are some that make sense: democracies; the Anglosphere; constitutional republics; large countries; developed countries; OECD or NATO members; the western hemisphere; non-tropical countries; largely monolingual countries; business-friendly economies; and even baseball-playing nations. Moreover, peer groups could be formed dynamically, e.g., countries at least as big as the U.S. in population or territory. And what would be the right metrics? The mind boggles at the number of interesting possibilities, all of which may have available data. As already pointed out, standard practice is to first specify an overarching goal, which then drives the choice of metrics and peer group. (Some web examples of standard benchmarking outputs are here, there, and elsewhere.) But what if the goal is to understand broadly how you’re doing and where you could improve? Tunnel Vision is caused by over-specific goals, limited metrics, and biased peer groups, all part of a standard benchmarking practice that becomes obsolete once all interesting metrics and potential peer groups that could lead to operational improvements can be explored.

Let’s run some numbers to show the scope of Tunnel Vision. Suppose there are 10 attributes with yes/no values and another 10 attributes that can take on any of five different values, plus one attribute that can take on 50 values, e.g., a U.S. state. There are theoretically 2^10 x 5^10 x 50 = 500 billion peer groups. Even if we include only peer groups whose attribute values match those of the specific individual to be benchmarked, the number would be 2^21 = 2.1 million peer groups.
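These counts are easy to verify:

```python
# Checking the peer-group counts from the text.
yes_no = 10        # attributes with yes/no values
five_valued = 10   # attributes with five possible values
states = 50        # one attribute with 50 values

# Every combination of attribute values defines a potential peer group.
total_groups = 2**yes_no * 5**five_valued * states
print(total_groups)      # 500,000,000,000 (500 billion)

# If each attribute must match the benchmarked individual's own value,
# each of the 21 attributes is simply included or excluded.
matching_groups = 2**21
print(matching_groups)   # 2,097,152 (about 2.1 million)
```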

Let’s move from the abstract to the concrete. Here are two (accurate) peer comparisons that are arguably insightful:

1. St Anthony Community Hospital in Warwick, NY has the lowest average time patients spent in the emergency department before they were seen by a healthcare professional of all the church-owned hospitals in the mid-Atlantic.

2. Macalester College in Saint Paul, MN has the highest total student-service expenses of any big-city private college that doesn’t offer graduate degrees.

Note the peer groups: (1) church-owned; mid-Atlantic; and (2) big-city; private; doesn’t offer graduate degrees. Now consider an imaginary peer comparison that uses four attributes to form a noteworthy peer group:

3. Cumulus Inc. is the most profitable of all the B2B, cloud-based, venture-backed companies that have at least 200 customers.

We see that considering more peer groups leads to uncovering more valuable benchmarking insights. Since the number of possible peer groups is vast, and benchmarking has seen little automation, this means that Tunnel Vision is necessarily widespread.

But the Tunnel Vision gets much worse! Peer groups can be formed, not just by picking non-numeric (aka symbolic) attributes, but also by dynamically determining numeric thresholds. Here’s a revealing (and true) insight that contains a dynamically-formed peer group:

None of the other 344 hospitals with as many patients who reported YES, they would definitely recommend the hospital (85%) as Stanford Hospital in Stanford, CA also has as few patients who reported that the area around their room was always quiet at night (41%).

That is, among those 344 hospitals, it has the fewest patients who reported that the area around their room was always quiet at night.


This is clearly a provocative insight. One can imagine a hospital CEO reacting in one of these ways:

  1. We’re profitable, prestigious, and have great weather. What’s a little nocturnal noise?
  2. There’s been night-time construction next door for the last year, and it’s almost done, so the problem will solve itself.
  3. I can’t think of any reason why we should be at the bottom of this elite peer group. I’ll forward this paragraph to our chief of operations to investigate and report back what may be happening.

This peer-comparison insight wouldn’t be found by today’s conventional benchmarking methods. Instead, what may be found is along these lines: The average value for this quantity among 309 California hospitals with known values is 51.5% with a standard deviation of 9.5%, so Stanford Hospital is about 1 standard deviation below average. The reader can judge which of the two insights is the more action-provoking, not just for the single individual in charge, but for the entire team that needs to be roused to act on and address performance gaps.
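The conventional figure cited above is easy to reproduce:

```python
# Conventional benchmarking reduces Stanford's position to a z-score,
# using the state-level figures quoted in the text.
state_mean = 51.5   # average % reporting a quiet area, 309 CA hospitals
state_sd = 9.5      # standard deviation
stanford = 41.0     # Stanford Hospital's value

z = (stanford - state_mean) / state_sd
print(round(z, 1))  # -1.1, i.e. about 1 standard deviation below average
```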

So far, we’ve used some math to highlight the Tunnel Vision problem and shown specific examples, real or fictitious, of what is being missed. As our last step, let’s report the results of actual software experiments.

The website hospitals.onlyboth.com showcases the results of applying an automated benchmarking engine to data on 4,813 U.S. hospitals described by 94 attributes, mostly downloaded from the Hospital Compare website at Medicare.gov. A combinatorial exploration of peer comparisons among the 4,813 hospitals turns up 98,296 benchmarking insights that survive the software’s quality, noteworthiness, and anti-redundancy filters, or about 20 per hospital. In this hospitals experiment, insights were required to place a hospital in the top or bottom ten within a peer group of sufficient size.
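To make the combinatorial exploration concrete, here is a toy sketch with synthetic data (OnlyBoth’s actual engine is far more sophisticated): form peer groups from every combination of non-numeric attributes, then keep only placements in the top or bottom ten of a sufficiently large group.

```python
from itertools import combinations

# Toy sketch of combinatorial peer-group exploration (synthetic data).
hospitals = [
    {"name": f"H{i}", "state": "CA" if i % 2 else "NY",
     "church_owned": i % 3 == 0, "score": i * 1.7 % 50}
    for i in range(200)
]
CATEGORICAL = ["state", "church_owned"]
MIN_GROUP, TOP_N = 30, 10

insights = []
for r in range(1, len(CATEGORICAL) + 1):
    for attrs in combinations(CATEGORICAL, r):
        # one peer group per observed combination of attribute values
        keys = {tuple(h[a] for a in attrs) for h in hospitals}
        for key in keys:
            group = [h for h in hospitals
                     if tuple(h[a] for a in attrs) == key]
            if len(group) < MIN_GROUP:
                continue  # size filter: small groups aren't noteworthy
            group.sort(key=lambda h: h["score"])
            for h in group[:TOP_N] + group[-TOP_N:]:
                insights.append((h["name"], attrs, key))

print(len(insights))
```

Even this tiny example, with two categorical attributes and one metric, yields over a hundred candidate insights; the real engine adds many more attributes, metrics, dynamically-formed numeric thresholds, and quality and anti-redundancy filters.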

The end results contain 522 different peer groups, formed by combining the hospital dataset’s 24 non-numeric attributes in various ways. As noted above, the number of peer groups is much, much larger if one counts, not the attributes used, but the diverse ways to combine attribute values: e.g., the attribute “state” can either be used or not, so there are two alternatives there, but the number of state values is 50 (or more, including non-state territories), implying many more alternatives. The number of peer groups becomes still larger when accounting for dynamically-formed peer groups based on numeric thresholds.

Of course, the engine explored more peer groups than appear in the end results, which are those found to be large and noteworthy enough to bring to human attention. Also, each peer group appears in many insights by combining them with the available metrics. On average, each of the 522 peer groups enables over 900 individual hospital insights, by further combining each peer group and metric with different hospitals.

Summarizing, Tunnel Vision in peer comparisons, or benchmarking for understanding and improvement, is widespread but misses a vast number of noteworthy and action-provoking insights that could help improve organizational performance. Without automation, there aren’t enough people and time in the world to explore what’s outside the Tunnel, select the best insights, and bring them to human attention. Software automation is the way forward.

Raul Valdes-Perez

Benchmarking Financials

Today we have launched a second public showcase application of our Benchmarking Engine, this time applied mostly to Department of Education IPEDS data on U.S. post-secondary educational institutions, or private colleges for short, that follow the FASB accounting standard, ranging from Harvard to the Belle Academy of Cosmetology. The financials data is from FY 2013, the latest available from IPEDS.

The 1,889 private colleges are described with 151 mostly-financial attributes, of which 101 are dollar amounts (investment, spending, debt, liability, etc. and their sub-categories) and 11 are financial ratios, augmented by some college rankings and profile and type attributes.

Given its emphasis on internal financial metrics, this benchmarking application addresses the core benchmarking questions from an institutional viewpoint, not from a student or faculty point of view. Some value judgments were made, for example that less debt is better than more debt, but of course in some circumstances more debt can be good, such as when the interest is low and the return on the debt is high.

Here is an example insight on how Columbia University can improve:

None of the other 1,631 private colleges with as few total liabilities ($3.028B) as Columbia also has as much debt related to property, plant, and equipment ($1.479B).  That is, it has the most debt related to property, plant, and equipment among those 1,631 private colleges.


On a related note closer to home, here’s a rather favorable insight about Carnegie Mellon:

In the Mid Atlantic with its 434 private colleges, only Carnegie Mellon both spends as much on research ($284.3M) and has as few research expenses – operation and maintenance of plant ($9.958M).

Clearly, software, psychology, and decision science don’t cost much! Over on the west coast, Stanford is seen to have rich, forthcoming donors:

Stanford has the most private gifts ($694.5M) among all 1,889 private colleges. Those $694.5M represent 4.3% of the total among all 1,889 private colleges, whose average is $8.632M.

As a final example, let’s move southwest and to smaller colleges, for example Austin College in Texas:

In the Southwest with its 101 private colleges, only Austin College has both as much construction in progress ($32.95M) and as few total assets ($251.5M).

Build it and they will come!

Accounting statements and financials in general are an especially promising application of Benchmarking Engines, because financial metrics follow established standards – FASB in this case – and relate to critical organizational performance.

Enter your own private college here.

Raul Valdes-Perez

Ranking SaaS Vendors by their Benchmarking Activity

As I’ve argued elsewhere, business benchmarking has been held back by the problem of data availability, as well as by the lack of software automation, despite its worthy goal of enabling continuous organizational improvement.


Most SaaS vendors are uniquely placed to sidestep the availability problem, because SaaS generates rich data as a byproduct of serving its customers. This data can be captured by vendors and put to good use, for the benefit of those same customers, via benchmarking. The exceptions tend to be utility-like SaaS, whose customers only care whether the service is on or off, or vendors who have little visibility into how customers perform the business process that their services support.

So how well are SaaS vendors exploiting this emerging opportunity? To find out, we analyzed the benchmarking activity of the Montclare SaaS 250 – the 250 “most successful SaaS companies in the world” according to Montclare, self-described as the “Industry’s Premier Research and Consulting Firm Focused on SaaS.” For each vendor, we measured benchmarking focus by dividing the number of its website’s hits on the query “benchmarking” by its total number of webpages, both as reported by the Google API. Below are the ranked results, which range from 0% to 94%. We opted to leave untouched a few anomalous results due to hits from hosted content, e.g., at Google and at LinkedIn.
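The metric itself is just a ratio. Here is an illustrative sketch with made-up vendor names and counts; the real hit counts came from the Google API, which is not shown here:

```python
# Illustrative sketch of the benchmarking-focus metric.
# Vendor names and (hits, total pages) counts are hypothetical.
def benchmarking_focus(hits_on_benchmarking, total_webpages):
    """Share of a vendor's webpages matching the query 'benchmarking'."""
    return hits_on_benchmarking / total_webpages

vendors = {
    "ExampleSaaS A": (473, 503),
    "ExampleSaaS B": (120, 2400),
    "ExampleSaaS C": (3, 9000),
}
ranked = sorted(vendors.items(),
                key=lambda kv: benchmarking_focus(*kv[1]), reverse=True)
for name, (hits, pages) in ranked:
    print(f"{name}: {benchmarking_focus(hits, pages):.2%}")
```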

Overall, there’s lots of activity. SaaS is a busy playing field for benchmarking. Unanswered here is whether that activity reflects actual vendor benchmarking services or something else. Also not addressed is whether vendor benchmarking is powered by automation.

Interestingly, SaaS pioneer Salesforce.com comes in at #221. The benchmarking content on its website tends toward blog topics and partner activity rather than Salesforce’s own offerings.

Raul Valdes-Perez

  1. Veracode 94.08%
  2. Tangoe 87.13%
  3. IQNavigator 80.62%
  4. Meltwater Group 70.14%
  5. athenahealth 64.66%
  6. SciQuest 53.85%
  7. MediData Solutions 53.48%
  8. ON24 26.37%
  9. ComScore 25.95%
  10. Intel 25.69%
  11. Apptio 25.3%
  12. ServiceSource 20.42%
  13. Symantec 16.41%
  14. Deltek 15.14%
  15. GTNexus 14.95%
  16. Xactly 14.01%
  17. Blackbaud 13.9%
  18. Jobvite 13.21%
  19. Trend Micro 12.81%
  20. Synygy 12.74%
  21. Coupa Software 12.52%
  22. AlphaBricks 12.5%
  23. Domo 11.85%
  24. ADP 11.54%
  25. Beckon 10.78%
  26. Marin Software 10.45%
  27. Intacct 10.0%
  28. Act-On Software 9.51%
  29. Peoplefluent 9.03%
  30. E2open 8.62%
  31. CallidusCloud 7.86%
  32. Amber Road 7.8%
  33. Fleetmatics 7.64%
  34. Demandware 7.3%
  35. Instart Logic 7.22%
  36. Reval 7.22%
  37. Wolters Kluwer 7.01%
  38. Globoforce 6.83%
  39. 3D Systems 6.82%
  40. Marketo 6.72%
  41. eGain 6.5%
  42. RingLead 6.5%
  43. Achievers 6.14%
  44. FICO 6.13%
  45. CRMnext 5.82%
  46. Veeva Systems 5.79%
  47. KnowledgeTree 5.73%
  48. Basware 5.61%
  49. Deem 5.52%
  50. Cornerstone OnDemand 5.4%
  51. Bullhorn 5.34%
  52. LiveOps 5.19%
  53. Tidemark 5.03%
  54. Hubspot 4.91%
  55. Lattice Engines 4.9%
  56. MindTree 4.87%
  57. Telogis 4.87%
  58. Plex 4.69%
  59. InsideView 4.48%
  60. Cloudpay 4.46%
  61. Monitise 4.44%
  62. Nice Systems 4.27%
  63. Birst 4.25%
  64. Payscale 4.24%
  65. inContact 4.23%
  66. NewVoiceMedia 4.19%
  67. Anaplan 4.16%
  68. PROS Holdings 4.08%
  69. Zuora 4.01%
  70. New Relic 3.99%
  71. Mimecast 3.97%
  72. Qualys 3.88%
  73. GoodData 3.86%
  74. FinancialForce.com 3.8%
  75. Insidesales.com 3.75%
  76. Actian 3.73%
  77. Cerner Corporation 3.66%
  78. CSC 3.66%
  79. Healthstream 3.66%
  80. MYOB 3.64%
  81. Adaptive Insights 3.6%
  82. Gainsight 3.6%
  83. ClearSlide 3.55%
  84. Verint Systems 3.52%
  85. Oracle 3.45%
  86. Lumesse 3.38%
  87. Ultimate Software 3.33%
  88. AppDynamics 3.26%
  89. Kronos 3.24%
  90. Ramco Systems 3.2%
  91. Halogen Software 3.18%
  92. RightScale 3.13%
  93. Descartes Systems 3.12%
  94. Workday 3.09%
  95. Fujitsu 2.98%
  96. NetSuite 2.93%
  97. Ceridian 2.89%
  98. QuestBack 2.88%
  99. Ericsson 2.84%
  100. Dassault Systèmes 2.8%
  101. Rocket Fuel 2.79%
  102. Nuance Communications 2.7%
  103. DealerTrack 2.66%
  104. Selectica 2.6%
  105. Survey Monkey 2.57%
  106. AdRoll 2.54%
  107. Opower 2.52%
  108. Saba 2.52%
  109. iCIMS 2.5%
  110. Intuit 2.48%
  111. Rally Software 2.44%
  112. Blackline Systems 2.38%
  113. Host Analytics 2.37%
  114. eVariant 2.36%
  115. Covisint 2.34%
  116. Apttus 2.32%
  117. Proofpoint 2.3%
  118. VMware 2.3%
  119. cVent 2.25%
  120. EMC Corporation 2.24%
  121. Epicor 2.24%
  122. ServiceMax 2.23%
  123. CashStar 2.09%
  124. SAS Institute 2.08%
  125. SugarCRM 2.08%
  126. Infor 2.03%
  127. OpenText 2.0%
  128. SPS Commerce 1.95%
  129. WebTrends 1.94%
  130. Akamai Technologies 1.93%
  131. DATEV eG 1.89%
  132. FPX 1.82%
  133. Hitachi 1.81%
  134. Huddle 1.81%
  135. Threatmetrix 1.8%
  136. BroadVision 1.79%
  137. Kyriba 1.79%
  138. Support.com 1.71%
  139. Castlight Health 1.68%
  140. Atlassian 1.65%
  141. Workforce Software 1.65%
  142. Bottomline Technologies 1.6%
  143. Brightcove 1.6%
  144. Retail Solutions 1.57%
  145. 2U 1.51%
  146. Five9 1.5%
  147. LinkedIn 1.41%
  148. Hyland Software 1.4%
  149. Workfront 1.39%
  150. Informatica 1.34%
  151. Mulesoft 1.33%
  152. SilkRoad 1.31%
  153. IBM 1.22%
  154. Mix Telematics 1.22%
  155. BenefitFocus 1.18%
  156. Blue Jeans Network 1.18%
  157. MicroStrategy 1.11%
  158. Trustwave 1.1%
  159. Google 1.09%
  160. TIBCO Software 1.09%
  161. Xero 1.09%
  162. Blackboard 1.08%
  163. Silver Spring Networks 1.08%
  164. Zendesk 1.04%
  165. AeroHive Networks 1.01%
  166. Alfresco 1.01%
  167. Clarizen 1.01%
  168. GitHub 0.98%
  169. Jive Software 0.98%
  170. Paychex 0.98%
  171. ASG Software 0.97%
  172. Cision 0.96%
  173. Freshbooks 0.95%
  174. Logik 0.94%
  175. Practice Fusion 0.94%
  176. Autodesk 0.92%
  177. SolarWinds 0.89%
  178. Pegasystems 0.88%
  179. Digital River 0.87%
  180. Siemens 0.86%
  181. Constant Contact 0.84%
  182. LivePerson 0.84%
  183. Synchronoss 0.81%
  184. Dell 0.78%
  185. Citrix 0.77%
  186. Opera Software 0.76%
  187. Hewlett-Packard 0.75%
  188. Tableau Software 0.75%
  189. Avangate 0.66%
  190. Paylocity 0.65%
  191. Mindjet 0.64%
  192. Cisco Systems 0.63%
  193. Aria Systems 0.62%
  194. Hightail 0.62%
  195. Glassdoor 0.6%
  196. Nakisa 0.6%
  197. Okta 0.6%
  198. Deluxe Corp 0.57%
  199. ChannelAdvisor 0.56%
  200. FrontRange 0.54%
  201. CA Technologies 0.53%
  202. Daptiv 0.51%
  203. SAP 0.51%
  204. ServiceNow 0.51%
  205. BMC Software 0.5%
  206. IntraLinks 0.5%
  207. Splunk 0.49%
  208. Finnet Limited 0.47%
  209. Bill.com 0.46%
  210. Limelight Networks 0.46%
  211. Box 0.44%
  212. Zoho 0.42%
  213. Adobe Systems 0.41%
  214. CollabNet 0.41%
  215. SugarSync 0.41%
  216. MobileIron 0.39%
  217. Lithium Technologies 0.32%
  218. RingCentral 0.32%
  219. Twilio 0.32%
  220. Elance/oDesk 0.3%
  221. Salesforce.com 0.29%
  222. Zscaler 0.28%
  223. Magic Software Enterprises 0.27%
  224. Microsoft 0.25%
  225. Jitterbit 0.23%
  226. Parallels 0.23%
  227. Bazaarvoice 0.17%
  228. Basecamp 0.16%
  229. Active Network 0.15%
  230. M-Files 0.15%
  231. DocuSign 0.14%
  232. LogMeIn 0.14%
  233. DropBox 0.11%
  234. Rocket Lawyer 0.11%
  235. Doximity 0.08%
  236. Ping Identity 0.06%
  237. BorderFree 0.04%
  238. Evernote 0.04%
  239. TOTVS 0.04%
  240. Exact Holding NV 0.03%
  241. Arena Solutions 0.0%
  242. Carbonite 0.0%
  243. Cybozu 0.0%
  244. Eventbrite 0.0%
  245. j2 Global 0.0%
  246. KDS 0.0%
  247. META4 0.0%
  248. Paycom 0.0%
  249. Xtenza Solutions 0.0%
  250. Vend 0.0%