Benchmarking 15,665 Nursing Homes

Today OnlyBoth launches, as a public service, what is likely the largest benchmarking analysis ever conducted, as measured by readable language output.

The U.S. has about 1.4 million nursing-home residents and 15,665 Medicare- or Medicaid-certified nursing homes. The federal government, through its regulatory powers and reimbursement function, collects performance data on all of these nursing homes. That data cries out for unbiased comparison, to understand how each home is doing and where each falls short relative to all of its peers or to peer subsets.


We downloaded data from the federal Nursing Home Compare website, spent a couple of days consolidating the information and configuring our Benchmarking Engine, then pushed a button (figuratively!) and waited just a day and a half. The output consists of 642,192 insights, containing more English words – in grammatical, to-the-point sentences and paragraphs – than two Encyclopaedia Britannicas. Enter a nursing home and browse the insights at

What did the benchmarking engine find?  At one extreme, the engine found the most to say about Signature Healthcare at Saint Francis in Memphis, TN, although most of it was not complimentary. There is clearly room for improvement there.

We often say that a massive analysis, of the kind only an automated benchmarking engine can perform, can find specific areas where even the best can improve. We discussed such a case – Stanford University Hospital – when launching our earlier hospitals benchmarking engine.

Let’s revisit sunny California. U.S. News & World Report lists Edgemoor Hospital in Santee, CA as the top nursing home in California, owing to its top five-star rating in all the major categories. Could even Edgemoor improve? The engine reveals several areas for improvement, for example this one:

Edgemoor Hospital in Santee, CA has the most facility-reported incidents (9) of all the 200 nursing homes that have the top rating in each of overall, health inspection, quality measures, staffing, and registered-nurse staffing (5 total). Those 9 represent 18.8% of the total across the 200 nursing homes, whose average is 0.2.

Now let’s consider Bridgepoint Sub-Acute and Rehab Capitol Hill in Washington, DC, the nursing home nearest Capitol Hill, where Congress meets. This facility does especially well in bladder and bowel control among low-risk, long-stay residents, as compared to other for-profit facilities located within a hospital. The engine found six specific areas ripe for improvement, the first of which is this:

Bridgepoint Sub-Acute and Rehab Capitol Hill in Washington, DC has the 3rd-most high-risk long-stay residents with pressure ulcers (24.8%) among the 1,982 Mid-Atlantic nursing homes. That 24.8% compares to an average of 6.4% across the 1,982 nursing homes.

Another noteworthy insight is that the facility has “the most severe deficiencies on the health survey (5) of the 718 nursing homes that have the top rating in each of quality measures, staffing, and registered-nurse staffing.”

This application is launched as a public service as well as a technology showcase; it benchmarks well over triple the number of entities covered by our previous largest application, which benchmarked 4,803 hospitals. The relevance to business is this:  the advent of cloud services, the internet of things, and other means of collecting customer performance data will enable the automated benchmarking of business processes, generating tremendous economic value because benchmarking 10,000 entities takes the same amount of work as benchmarking 10.

Our goal is universal betterment through persuasive, motivating insights that pinpoint what is going well and where improvement is sorely needed and achievable. Benchmarking Engines will do for business benchmarking what Search Engines did for information seeking, assigning to computers what they do better – massive comparisons – and to people what they do better – evaluating benchmarking insights and following up as appropriate.

Raul Valdes-Perez


Relaunch of Hospitals Benchmarking Engine

OnlyBoth benchmarks U.S. hospitals as both a public service and as a visible demonstration of the power of an automated Benchmarking Engine. This enables hospital stakeholders to instantly discover in perfect English how they’re doing, not compared to absolute standards or arbitrary peers, but to all peers and groups.

We launched our first version this summer. Today we relaunched our hospitals benchmarking engine based on fresh data and technical advances:

  1. updated Hospital Compare dataset, now covering 4,803 hospitals
  2. new hospital attributes relating to hospital performance and geography
  3. better expression of key types of insights
  4. improved heuristics leading to more insights per hospital
  5. addition of data on hospital networks, enabling intra-network comparisons

1.  We have refreshed the data in the hospital application based on a late-September data release at the Hospital Compare data download page. This new release also contains new hospital attributes, as discussed below.

2.  Since geography is an important determinant of peer groups, we’ve added attributes that enable grouping East Coast, Southern, and Western states. We’ve also added two new attributes from the updated Hospital Compare data that relate to deaths or unplanned readmissions due to coronary artery bypass grafting (CABG) surgery, and five new attributes that express hospital-readmission ratios for various afflictions.
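As a hypothetical illustration of such geographic attributes, a derived region attribute can be computed from each hospital's state code. The groupings below are illustrative assumptions, not our actual region definitions:

```python
# Hypothetical sketch: deriving coarse "region" peer-group attributes from a
# state code. The state-to-region assignments here are illustrative only.
EAST_COAST = {"ME", "NH", "MA", "RI", "CT", "NY", "NJ", "DE", "MD", "VA",
              "NC", "SC", "GA", "FL"}
SOUTHERN = {"TX", "OK", "AR", "LA", "MS", "AL", "TN", "KY", "WV", "GA",
            "FL", "SC", "NC", "VA"}
WESTERN = {"WA", "OR", "CA", "NV", "ID", "MT", "WY", "UT", "CO", "AZ",
           "NM", "AK", "HI"}

def regions_for(state: str) -> list:
    """Return every regional peer group a state belongs to (may overlap)."""
    groups = []
    if state in EAST_COAST:
        groups.append("East Coast")
    if state in SOUTHERN:
        groups.append("Southern")
    if state in WESTERN:
        groups.append("Western")
    return groups

print(regions_for("CA"))  # ['Western']
```

Overlapping groups are deliberate: a Florida hospital, for instance, would be benchmarked against both East Coast and Southern peers.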

3.  A key type of insight expresses how entities that are within an elite peer group fall short along some key dimension. For example, our recent Harvard Business Review article, which explains why benchmarking is done wrong and how to do it right, gives this example of Stanford Hospital:

None of the other 344 hospitals with as many patients who reported YES, they would definitely recommend the hospital (85%) as Stanford Hospital in Stanford, CA also has as few patients who reported that the area around their room was always quiet at night (41%). That is, among those 344 hospitals, it has the fewest patients who reported that the area around their room was always quiet at night.

As the saying goes, this was too clever by half. After considering feedback from user surveys, this insight now appears, with the refreshed data, like this:

Stanford Hospital in Stanford, CA has the fewest patients who reported that the area around their room was always quiet at night (40%) among the 811 hospitals with at least 80% of patients who reported YES, they would definitely recommend the hospital (Stanford Hospital is at 84%). That 40% compares to an average of 69.4% and standard deviation of 10.7% across the 811 hospitals.
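The logic behind this kind of insight can be sketched in a few lines: pick a threshold on one metric, form the peer group that clears it, and find the extreme on a second metric. The data and field names below are invented for illustration; this is not OnlyBoth's actual code:

```python
# Sketch of a threshold-defined peer-group insight. Among rows clearing a
# threshold on one metric, find the row at the bottom of another metric,
# along with the group's average and standard deviation.
from statistics import mean, pstdev

def bottom_in_threshold_group(rows, group_metric, threshold, target_metric):
    """Return (worst row, group size, group average, group std deviation)."""
    peers = [r for r in rows if r[group_metric] >= threshold]
    worst = min(peers, key=lambda r: r[target_metric])
    values = [r[target_metric] for r in peers]
    return worst, len(peers), mean(values), pstdev(values)

# Invented example data: % who would definitely recommend, % reporting quiet.
hospitals = [
    {"name": "Stanford Hospital", "recommend": 84, "quiet": 40},
    {"name": "Hospital B", "recommend": 90, "quiet": 72},
    {"name": "Hospital C", "recommend": 81, "quiet": 68},
    {"name": "Hospital D", "recommend": 70, "quiet": 55},  # below threshold
]
worst, n, avg, sd = bottom_in_threshold_group(hospitals, "recommend", 80, "quiet")
print(f"{worst['name']} has the fewest patients reporting a quiet area "
      f"({worst['quiet']}%) among the {n} hospitals with at least 80% who "
      f"would definitely recommend it; the group average is {avg:.1f}%.")
```

The engine's real work lies in choosing thresholds and peer groups that make the resulting sentence noteworthy rather than trivial.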

Of course, this improvement affects thousands of insights, and millions in the future.

4.  We’ve improved the heuristics that enable finding valuable needles within the huge haystack that results from taking multiple slices out of a dataset of half a million hospital attribute values. Our new Hospitals Benchmarking contains 522,142 insights, or around 109 insights per hospital, compared to the previous 101 per hospital. The key benchmarking question – Where can this hospital improve? – has seen a 4% increase in answers per hospital.

5.  For a hospital-network executive, it’s valuable to benchmark individual hospitals against others in the network, especially because knowledge transfer of good practices can happen more easily when two entities have the same owner. We’ve added a parent attribute that for now includes four networks:  UPMC, Kaiser Foundation, Texas Health Resources, and NYC Health and Hospitals. We’ll add other hospital networks over time.

We expect that this hospitals application, and the diffusion of benchmarking engines in general, will further the goal of enabling universal betterment through data-driven comparison with peers, greatly simplified in terms of human work, but greatly expanded in terms of action-provoking insights.

Raul Valdes-Perez

Avoiding Tunnel Vision in Peer Comparisons

Comparing yourself to peers – also known as benchmarking – lets you understand how you’re doing, identify performance gaps and opportunities to improve, and highlight peer achievements that you could emulate, or your own achievements to be celebrated. As long as data is available, peer comparison can potentially accomplish all of these. The opportunities for peer comparison are greatly increasing due to cloud and other services that generate data as a by-product of serving customers.

The problem is that peer comparison as generally practiced suffers from Tunnel Vision and so misses a lot, to everyone’s detriment. To understand why, let’s first consider an analogy to search engines.

An information seeker, before there were search engines, might have gone to consult a librarian on, say, computers and heard “That’s technology, so look in the Technology books section, over in the back, by the right.” But there’s plenty of material on computers that’s catalogued elsewhere, e.g., automation’s impact on employment and job training, the philosophical question of whether computers in principle could do everything that people do, cognitive modeling of human reasoning using computers, computer history, and so on. The point is that looking only in the Technology section is an example of Tunnel Vision, or maybe bookshelf vision. Search engines changed that.

So where’s the Tunnel Vision in peer comparisons? It’s almost universal practice that the benchmarker chooses one or two organizational goals, then picks a few key metrics (key performance indicators) relevant to those goals, and finally selects several peer groups from a limited set. The outputs are then the mean, median, distribution, or high-percentile values for those peer groups on those metrics. The conclusion is that the organization may or may not have a problem, which may or may not be addressable. The flaw in all this is that organizations have many goals and subgoals, and many metrics that could reveal performance gaps, especially if a very large set of peer groups could also be explored. But our human inability to explore many paths in parallel imposes this Tunnel Vision, for the same reason that pre-search-engines information seekers went looking in one or two sections of the library.

As an example of peer-group selection, suppose you wanted to compare the U.S. against other nations. What would be the right peer group?  Here are some that make sense: democracies; the Anglosphere; constitutional republics; large countries; developed countries; OECD or NATO members; the western hemisphere; non-tropical countries; largely monolingual countries; business-friendly economies; and even baseball-playing nations. Moreover, peer groups could be formed dynamically, e.g., countries at least as big as the U.S. in population or territory. And what would be the right metrics? The mind boggles at the number of interesting possibilities, all of which may have available data. As already pointed out, standard practice is to first specify an overarching goal, which then drives the choice of metrics and peer group. (Some web examples of standard benchmarking outputs are here, there, and elsewhere.) But what if the goal is to understand broadly how you’re doing and where you could improve? Tunnel Vision is caused by over-specific goals, limited metrics, and biased peer groups – all part of standard benchmarking practice, which becomes obsolete once all the interesting metrics and potential peer groups that could lead to operational improvements can be explored.

Let’s run some numbers to show the scope of Tunnel Vision. Suppose there are 10 attributes with yes/no values and another 10 attributes that can take on any of five different values, plus one attribute that can take on 50 values, e.g., a U.S. state. There are theoretically 2^10 × 5^10 × 50 = 500 billion peer groups. Even if we include only peer groups whose attribute values match those of the specific individual to be benchmarked, the number would be 2^21 ≈ 2.1 million peer groups.
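The arithmetic can be checked in a few lines:

```python
# Verifying the peer-group counts above. Every combination of attribute
# values defines a potential peer group; restricting each attribute to
# either "matches the benchmarked entity" or "unused" leaves one binary
# choice per attribute (10 yes/no + 10 five-valued + 1 state = 21 choices).
all_groups = (2 ** 10) * (5 ** 10) * 50        # every value combination
matching_groups = 2 ** (10 + 10 + 1)           # use-or-ignore per attribute

print(f"{all_groups:,}")       # 500,000,000,000
print(f"{matching_groups:,}")  # 2,097,152
```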

Let’s move from the abstract to the concrete. Here are two (accurate) peer comparisons that are arguably insightful:

1. St Anthony Community Hospital in Warwick, NY has the lowest average time patients spent in the emergency department before they were seen by a healthcare professional of all the church-owned hospitals in the mid-Atlantic.

2. Macalester College in Saint Paul, MN has the highest total student-service expenses of any big-city private college that doesn’t offer graduate degrees.

Note the peer groups: (1) church-owned; mid-Atlantic; and (2) big-city; private; doesn’t offer graduate degrees. Now consider an imaginary peer comparison that uses four attributes to form a noteworthy peer group:

3. Cumulus Inc. is the most profitable of all the B2B, cloud-based, venture-backed companies that have at least 200 customers.

We see that considering more peer groups leads to uncovering more valuable benchmarking insights. Since the number of possible peer groups is vast, and benchmarking has seen little automation, this means that Tunnel Vision is necessarily widespread.

But the Tunnel Vision gets much worse! Peer groups can be formed, not just by picking non-numeric (aka symbolic) attributes, but also by dynamically determining numeric thresholds. Here’s a revealing (and true) insight that contains a dynamically-formed peer group:

None of the other 344 hospitals with as many patients who reported YES, they would definitely recommend the hospital (85%) as Stanford Hospital in Stanford, CA also has as few patients who reported that the area around their room was always quiet at night (41%).

That is, among those 344 hospitals, it has the fewest patients who reported that the area around their room was always quiet at night.
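This dynamically-formed insight has a simple logical core: an entity is reported only if no other entity matches or beats it on the first metric while also matching or undercutting it on the second. A minimal sketch, with invented data and field names (not OnlyBoth's actual implementation):

```python
# Sketch of the two-metric dominance insight: "None of the other entities
# with as high a value on high_metric also has as low a value on low_metric."
def uniquely_extreme(rows, entity, high_metric, low_metric):
    """True if no other row matches/beats entity on high_metric while also
    matching/undercutting it on low_metric."""
    others = [r for r in rows if r is not entity]
    return not any(r[high_metric] >= entity[high_metric] and
                   r[low_metric] <= entity[low_metric] for r in others)

# Invented example data echoing the insight above.
hospitals = [
    {"name": "Stanford", "recommend": 85, "quiet": 41},
    {"name": "B", "recommend": 88, "quiet": 60},
    {"name": "C", "recommend": 85, "quiet": 55},
]
stanford = hospitals[0]
print(uniquely_extreme(hospitals, stanford, "recommend", "quiet"))  # True
```

The peer group here is not chosen in advance: the 85% threshold is the entity's own value, which is what makes the group dynamically formed.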


This is clearly a provocative insight. One can imagine a hospital CEO reacting in one of these ways:

  1. We’re profitable, prestigious, and have great weather. What’s a little nocturnal noise?
  2. There’s been night-time construction next door for the last year, and it’s almost done, so the problem will solve itself.
  3. I can’t think of any reason why we should be at the bottom of this elite peer group. I’ll forward this paragraph to our chief of operations to investigate and report back what may be happening.

This peer-comparison insight wouldn’t be found by today’s conventional benchmarking methods. Instead, what may be found is along these lines: The average value for this quantity among 309 California hospitals with known values is 51.5% with a standard deviation of 9.5%, so Stanford Hospital is about 1 standard deviation below average. The reader can judge which of the two insights is the more action-provoking, not just for the single individual in charge, but for the entire team that needs to be roused to act on and address performance gaps.

So far, we’ve used some math to highlight the Tunnel Vision problem and shown specific examples, real or fictitious, of what is being missed. As our last step, let’s report the results of actual software experiments.

The website showcases the results of applying an automated benchmarking engine to data on 4,813 U.S. hospitals described by 94 attributes, mostly downloaded from the Hospital Compare website. A combinatorial exploration of peer comparisons among the 4,813 hospitals turns up 98,296 benchmarking insights that survive the software’s quality, noteworthiness, and anti-redundancy filters – about 20 per hospital. In this hospitals experiment, insights were required to place a hospital in the top or bottom ten within a peer group of sufficient size.

The end results contain 522 different peer groups, formed by combining the hospital dataset’s 24 non-numeric attributes in various ways. As noted above, the number of possible peer groups is much, much larger if one counts not the attributes used but the diverse ways to combine attribute values: the attribute “state” can either be used or not – two alternatives – but the number of state values is 50 (or more, counting non-state territories), implying many more alternatives. The number grows larger still when accounting for dynamically-formed peer groups based on numeric thresholds.

Of course, the engine explored more peer groups than appear in the end results, which include only those found to be large and noteworthy enough to bring to human attention. Also, each peer group appears in many insights, by combining it with the available metrics. On average, each of the 522 peer groups enables over 900 individual hospital insights, by further combining each peer group and metric with different hospitals.

Summarizing, Tunnel Vision in peer comparisons, or benchmarking for understanding and improvement, is widespread but misses a vast number of noteworthy and action-provoking insights that could help improve organizational performance. Without automation, there aren’t enough people and time in the world to explore what’s outside the Tunnel, select the best insights, and bring them to human attention. Software automation is the way forward.

Raul Valdes-Perez

Benchmarking Financials

Today we have launched a second public showcase application of our Benchmarking Engine, this time applied mostly to Department of Education IPEDS data on U.S. post-secondary educational institutions – private colleges for short – that follow the FASB accounting standard, ranging from Harvard to the Belle Academy of Cosmetology. The financials data is from FY 2013, the latest available from IPEDS.

The 1,889 private colleges are described with 151 mostly-financial attributes, of which 101 are dollar amounts (investment, spending, debt, liability, etc. and their sub-categories) and 11 are financial ratios, augmented by some college rankings and profile and type attributes.

Given its emphasis on internal financial metrics, this benchmarking application addresses the core benchmarking questions from an institutional viewpoint, not from a student or faculty point of view. Some value judgments were made, for example that less debt is better than more debt, but of course in some circumstances more debt can be good, such as when the interest is low and the return on the debt is high.

Here is an example insight on how Columbia University can improve:

None of the other 1,631 private colleges with as few total liabilities ($3.028B) as Columbia also has as much debt related to property, plant, and equipment ($1.479B).  That is, it has the most debt related to property, plant, and equipment among those 1,631 private colleges.


On a related note closer to home, here’s a rather favorable insight about Carnegie Mellon:

In the Mid Atlantic with its 434 private colleges, only Carnegie Mellon both spends as much on research ($284.3M) and has as few research expenses – operation and maintenance of plant ($9.958M).

Clearly, software, psychology, and decision science don’t cost much! Over on the west coast, Stanford is seen to have rich, forthcoming donors:

Stanford has the most private gifts ($694.5M) among all 1,889 private colleges. Those $694.5M represent 4.3% of the total among all 1,889 private colleges, whose average is $8.632M.

As a final example, let’s move southwest and to smaller colleges, for example Austin College in Texas:

In the Southwest with its 101 private colleges, only Austin College has both as much construction in progress ($32.95M) and as few total assets ($251.5M).

Build it and they will come!

Accounting statements and financials in general are an especially promising application of Benchmarking Engines, because financial metrics follow established standards – FASB in this case – and relate to critical organizational performance.

Enter your own private college here.

Raul Valdes-Perez

Ranking SaaS Vendors by their Benchmarking Activity

As I’ve argued elsewhere, business benchmarking has been held back by the problem of data availability, as well as by the lack of software automation, despite its worthy goal of enabling continuous organizational improvement.


Most SaaS vendors are uniquely placed to sidestep the availability problem, because SaaS generates rich data as a byproduct of serving its customers. This data can be captured by vendors and put to good use, for the benefit of those same customers, via benchmarking. The exceptions tend to be utility-like SaaS, whose customers only care whether the service is on or off, or vendors who have little visibility into how customers perform the business process that their services support.

So how well are SaaS vendors exploiting this emerging opportunity?  To find out, we analyzed the benchmarking activity of the Montclare SaaS 250 – the 250 “most successful SaaS companies in the world” according to Montclare, self-described as the “Industry’s Premier Research and Consulting Firm Focused on SaaS.”  For each vendor, we measured benchmarking focus by dividing the number of its website’s hits on the query “benchmarking” by its total number of webpages, both as reported by the Google API.  Below are the ranked results, which range from 0% to 94%.  We opted to leave untouched a few anomalous results caused by hits on hosted content, e.g., at Google and at LinkedIn.
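The methodology reduces to a ratio and a sort. A minimal sketch with invented counts (the real analysis took both numbers from the Google API):

```python
# Hypothetical sketch of the ranking: benchmarking focus = (pages matching
# "benchmarking") / (total pages), expressed as a percentage. The hit and
# page counts below are invented for illustration.
def rank_by_focus(counts):
    """counts: {vendor: (benchmarking_hits, total_pages)}.
    Returns (vendor, pct) pairs sorted by descending hit ratio."""
    scored = []
    for vendor, (hits, pages) in counts.items():
        pct = 100.0 * hits / pages if pages else 0.0
        scored.append((vendor, round(pct, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

sample = {"Veracode": (1591, 1691), "Tangoe": (880, 1010), "Vend": (0, 420)}
for vendor, pct in rank_by_focus(sample):
    print(f"{vendor} {pct}%")
```

Note that this measures attention to the topic, not the quality of any benchmarking service; the caveats in the next paragraph apply.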

Overall, there’s lots of activity. SaaS is a busy playing field for benchmarking. Unanswered here is whether that activity reflects actual vendor benchmarking services or something else. Also not addressed is whether vendor benchmarking is powered by automation.

Interestingly, SaaS pioneer Salesforce comes in at #221 below. Benchmarking on its website tends toward blog topics or partner activity, not Salesforce’s own offerings.

Raul Valdes-Perez

  1. Veracode 94.08%
  2. Tangoe 87.13%
  3. IQNavigator 80.62%
  4. Meltwater Group 70.14%
  5. athenahealth 64.66%
  6. SciQuest 53.85%
  7. MediData Solutions 53.48%
  8. ON24 26.37%
  9. ComScore 25.95%
  10. Intel 25.69%
  11. Apptio 25.3%
  12. ServiceSource 20.42%
  13. Symantec 16.41%
  14. Deltek 15.14%
  15. GTNexus 14.95%
  16. Xactly 14.01%
  17. Blackbaud 13.9%
  18. Jobvite 13.21%
  19. Trend Micro 12.81%
  20. Synygy 12.74%
  21. Coupa Software 12.52%
  22. AlphaBricks 12.5%
  23. Domo 11.85%
  24. ADP 11.54%
  25. Beckon 10.78%
  26. Marin Software 10.45%
  27. Intacct 10.0%
  28. Act-On Software 9.51%
  29. Peoplefluent 9.03%
  30. E2open 8.62%
  31. CallidusCloud 7.86%
  32. Amber Road 7.8%
  33. Fleetmatics 7.64%
  34. Demandware 7.3%
  35. Instart Logic 7.22%
  36. Reval 7.22%
  37. Wolters Kluwer 7.01%
  38. Globoforce 6.83%
  39. 3D Systems 6.82%
  40. Marketo 6.72%
  41. eGain 6.5%
  42. RingLead 6.5%
  43. Achievers 6.14%
  44. FICO 6.13%
  45. CRMnext 5.82%
  46. Veeva Systems 5.79%
  47. KnowledgeTree 5.73%
  48. Basware 5.61%
  49. Deem 5.52%
  50. Cornerstone OnDemand 5.4%
  51. Bullhorn 5.34%
  52. LiveOps 5.19%
  53. Tidemark 5.03%
  54. Hubspot 4.91%
  55. Lattice Engines 4.9%
  56. MindTree 4.87%
  57. Telogis 4.87%
  58. Plex 4.69%
  59. InsideView 4.48%
  60. Cloudpay 4.46%
  61. Monitise 4.44%
  62. Nice Systems 4.27%
  63. Birst 4.25%
  64. Payscale 4.24%
  65. inContact 4.23%
  66. NewVoiceMedia 4.19%
  67. Anaplan 4.16%
  68. PROS Holdings 4.08%
  69. Zuora 4.01%
  70. New Relic 3.99%
  71. Mimecast 3.97%
  72. Qualys 3.88%
  73. GoodData 3.86%
  74. 3.8%
  75. 3.75%
  76. Actian 3.73%
  77. Cerner Corporation 3.66%
  78. CSC 3.66%
  79. Healthstream 3.66%
  80. MYOB 3.64%
  81. Adaptive Insights 3.6%
  82. Gainsight 3.6%
  83. ClearSlide 3.55%
  84. Verint Systems 3.52%
  85. Oracle 3.45%
  86. Lumesse 3.38%
  87. Ultimate Software 3.33%
  88. AppDynamics 3.26%
  89. Kronos 3.24%
  90. Ramco Systems 3.2%
  91. Halogen Software 3.18%
  92. RightScale 3.13%
  93. Descartes Systems 3.12%
  94. Workday 3.09%
  95. Fujitsu 2.98%
  96. NetSuite 2.93%
  97. Ceridian 2.89%
  98. QuestBack 2.88%
  99. Ericsson 2.84%
  100. Dassault Systèmes 2.8%
  101. Rocket Fuel 2.79%
  102. Nuance Communications 2.7%
  103. DealerTrack 2.66%
  104. Selectica 2.6%
  105. Survey Monkey 2.57%
  106. AdRoll 2.54%
  107. Opower 2.52%
  108. Saba 2.52%
  109. iCIMS 2.5%
  110. Intuit 2.48%
  111. Rally Software 2.44%
  112. Blackline Systems 2.38%
  113. Host Analytics 2.37%
  114. eVariant 2.36%
  115. Covisint 2.34%
  116. Apttus 2.32%
  117. Proofpoint 2.3%
  118. VMware 2.3%
  119. cVent 2.25%
  120. EMC Corporation 2.24%
  121. Epicor 2.24%
  122. ServiceMax 2.23%
  123. CashStar 2.09%
  124. SAS Institute 2.08%
  125. SugarCRM 2.08%
  126. Infor 2.03%
  127. OpenText 2.0%
  128. SPS Commerce 1.95%
  129. WebTrends 1.94%
  130. Akamai Technologies 1.93%
  131. DATEV eG 1.89%
  132. FPX 1.82%
  133. Hitachi 1.81%
  134. Huddle 1.81%
  135. Threatmetrix 1.8%
  136. BroadVision 1.79%
  137. Kyriba 1.79%
  138. 1.71%
  139. Castlight Health 1.68%
  140. Atlassian 1.65%
  141. Workforce Software 1.65%
  142. Bottomline Technologies 1.6%
  143. Brightcove 1.6%
  144. Retail Solutions 1.57%
  145. 2U 1.51%
  146. Five9 1.5%
  147. LinkedIn 1.41%
  148. Hyland Software 1.4%
  149. Workfront 1.39%
  150. Informatica 1.34%
  151. Mulesoft 1.33%
  152. SilkRoad 1.31%
  153. IBM 1.22%
  154. Mix Telematics 1.22%
  155. BenefitFocus 1.18%
  156. Blue Jeans Network 1.18%
  157. MicroStrategy 1.11%
  158. Trustwave 1.1%
  159. Google 1.09%
  160. TIBCO Software 1.09%
  161. Xero 1.09%
  162. Blackboard 1.08%
  163. Silver Spring Networks 1.08%
  164. Zendesk 1.04%
  165. AeroHive Networks 1.01%
  166. Alfresco 1.01%
  167. Clarizen 1.01%
  168. GitHub 0.98%
  169. Jive Software 0.98%
  170. Paychex 0.98%
  171. ASG Software 0.97%
  172. Cision 0.96%
  173. Freshbooks 0.95%
  174. Logik 0.94%
  175. Practice Fusion 0.94%
  176. Autodesk 0.92%
  177. SolarWinds 0.89%
  178. Pegasystems 0.88%
  179. Digital River 0.87%
  180. Siemens 0.86%
  181. Constant Contact 0.84%
  182. LivePerson 0.84%
  183. Synchronoss 0.81%
  184. Dell 0.78%
  185. Citrix 0.77%
  186. Opera Software 0.76%
  187. Hewlett-Packard 0.75%
  188. Tableau Software 0.75%
  189. Avangate 0.66%
  190. Paylocity 0.65%
  191. Mindjet 0.64%
  192. Cisco Systems 0.63%
  193. Aria Systems 0.62%
  194. Hightail 0.62%
  195. Glassdoor 0.6%
  196. Nakisa 0.6%
  197. Okta 0.6%
  198. Deluxe Corp 0.57%
  199. ChannelAdvisor 0.56%
  200. FrontRange 0.54%
  201. CA Technologies 0.53%
  202. Daptiv 0.51%
  203. SAP 0.51%
  204. ServiceNow 0.51%
  205. BMC Software 0.5%
  206. IntraLinks 0.5%
  207. Splunk 0.49%
  208. Finnet Limited 0.47%
  209. 0.46%
  210. Limelight Networks 0.46%
  211. Box 0.44%
  212. Zoho 0.42%
  213. Adobe Systems 0.41%
  214. CollabNet 0.41%
  215. SugarSync 0.41%
  216. MobileIron 0.39%
  217. Lithium Technologies 0.32%
  218. RingCentral 0.32%
  219. Twilio 0.32%
  220. Elance/oDesk 0.3%
  221. Salesforce 0.29%
  222. Zscaler 0.28%
  223. Magic Software Enterprises 0.27%
  224. Microsoft 0.25%
  225. Jitterbit 0.23%
  226. Parallels 0.23%
  227. Bazaarvoice 0.17%
  228. Basecamp 0.16%
  229. Active Network 0.15%
  230. M-Files 0.15%
  231. DocuSign 0.14%
  232. LogMeIn 0.14%
  233. DropBox 0.11%
  234. Rocket Lawyer 0.11%
  235. Doximity 0.08%
  236. Ping Identity 0.06%
  237. BorderFree 0.04%
  238. Evernote 0.04%
  239. TOTVS 0.04%
  240. Exact Holding NV 0.03%
  241. Arena Solutions 0.0%
  242. Carbonite 0.0%
  243. Cybozu 0.0%
  244. Eventbrite 0.0%
  245. j2 Global 0.0%
  246. KDS 0.0%
  247. META4 0.0%
  248. Paycom 0.0%
  249. Xtenza Solutions 0.0%
  250. Vend 0.0%


How Organizations Can Improve

In my discussions and readings, I’ve come across several ways that leaders try to improve organizational performance.  Let’s consider one approach that might be called “Just do it” and could be parodied as follows.  The Maximum Leader decides that twenty different metrics express the performance of an organization, and then mandates a 50% improvement in each metric.

Pretty simple.  What’s wrong with that approach?  There are several drawbacks.

The first problem is that people can’t work on too many things at once.  The same is true about organizations in the context of initiatives that involve multiple parts of the organization and so require coordination.  People and organizations both need to focus.

A second problem is that it may not be practical, or sometimes even theoretically possible, to achieve large performance increases across the board.  Each metric expresses a different operational aspect and may be subject to different, practical limits to achievable improvements.

Third, people and organizations need to be convinced and inspired to act.  Unless you can mandate prison time – or worse – on top of mandating all the improvements, people need reasons which are best articulated persuasively and linguistically, that is, as sentences.  (Mere words aren’t reasons and so aren’t best at persuading, although they can summarize and inspire, e.g., Onward and Upward!)

A conventional alternative to massive mandates is comparing oneself to other organizations in order to identify a few important areas in which the organization is falling short (problem #1 – focus), to find other organizations achieving better results (problem #2 – practicality), and to express the problem and need succinctly so as to persuade others to get fully on board (problem #3 – articulation).

Benchmarking in theory can achieve all of these, since it starts with data that enable comparison with other organizations.  As we have written elsewhere, benchmarking has been held back because it’s often applied in cases where data availability is a problem, and because the lack of automation leads to high costs, uncertain outcomes, and the biases that are necessarily introduced by the manual methods employed.

Benchmarking engines are the way forward, especially when they lead to concise, specific insights, expressed well, on a large variety of dimensions that can be addressed departmentally, not just organization-wide.  These insights are a spur to action – the action of deciding whether something is a practically addressable issue that should be a near- or mid-term priority, and then doing something about it.  The issue could be either a problem or a cause for praise, and the actions can improve on the problems or lead to copying the practices that resulted in the good outcomes revealed by benchmarking.

Organizations need to focus on practically soluble issues that can be set up for action and can be articulated persuasively to people.

Raul Valdes-Perez

Why I joined OnlyBoth

People who know me have been asking why I joined OnlyBoth – an early stage technology startup. That’s quickly followed by the question: what is OnlyBoth anyway? Fair questions.

First, the “what” question.  Founded by entrepreneurs Raul Valdes-Perez and Andre Lessa, OnlyBoth is the pioneer of artificial intelligence-based benchmarking software. Its fusion of proprietary artificial intelligence and natural language generation technology enables companies in a variety of industries to automatically discover critical business insights from data, and to communicate these in plain English.

Now the “why” question.  There are actually five reasons, ranging from the lofty to the practical.  Let me explain.

Reason One: Impact.  I see this as an opportunity to make a huge impact – on customers, industry, and society, as well as on OnlyBoth’s employees and partners.  The bigger the impact, the more energized I get.  We’re addressing a pervasive need – to know how someone or something is doing in comparison to others. Capitalizing on the combination of today’s Big Data and OnlyBoth’s software, uncovering comparative insights and triggering business improvements is cheaper, simpler, and more convenient than ever. It automates a process that has, until now, been largely based on expensive and scarce talent. And it has a potentially strong position in a large, attractive market – a key prerequisite for the most successful products.  OnlyBoth’s unique technology was years in the making, so it won’t be quick or easy for anyone else to create an alternative that delivers better results. So I see OnlyBoth’s technology as disruptive – in a good way.

Reason Two: Startup.  The company entered the market this summer. I joined at the end of July; before then, I had occasionally advised the co-founders and helped them explore the market problem and opportunity. For me, joining a company when it first gets started is the ideal time to learn and grow together. I’m at my best building things from scratch into something of value.  I’ve helped to build new businesses, new product lines, new product categories, new customer bases, and new skills at four previous startups, as well as three multinational corporations and Carnegie Mellon University. OnlyBoth provides an opportunity to work on all of these.  So the timing on this one was nearly perfect.

Reason Three: Values and Respect.  I respect a lot of entrepreneurs and business people, but I haven’t always been at ease with their values.  However, Raul, OnlyBoth’s CEO, is someone I believe I can work with comfortably for a long time.  That’s important because it can take years to build something new that makes a real impact. A company’s CEO, more than anyone else, shapes the culture, standards and identity of an organization. Raul is very keen on doing what’s right, whether it’s for customers, employees or other stakeholders. I’ve found Raul to be straightforward and open.  He has integrity, values loyalty, and has a strong customer-value orientation.  He stays focused, seeks input from others, gives people freedom to use their talents, and delivers on commitments.  I share those values; they give me confidence we can work effectively together.  And besides, experience matters. Raul has made this journey before.  IBM acquired his first software startup twelve years after it was founded.

Reason Four: Finances. Because IBM acquired their previous business, the founders are in a position to finance and scale OnlyBoth themselves. Although it’s an attractive opportunity for investment capital – VCs are actively funding other projects in the artificial intelligence and natural language technology spaces – we can devote our energies to solving customer problems, creating value with our products, and building a solid business, rather than pitching it to potential investors. Even better, we’re not dealing with embryonic technology.  The research and development behind OnlyBoth’s technology started back in the late 1990s with the support of a National Science Foundation grant, and it’s now ready for market.

Reason Five: Responsibilities.  I’m very excited about my new role as Chief Customer Officer.  I’m leading the development of our customer base and making sure our customers get the maximum value from our products and from our relationship. But in the early stages of a startup, flexibility is key, so my role could evolve and take on new dimensions. In just my first month, I’ve had insightful discussions with over 30 people from organizations in our target market.  I’m learning a lot about the problems customers face, the options they have to solve them, their readiness for a new solution, and where our communications can be improved. The people I’ve met have been very generous and helpful, for which I’m extremely grateful.  It’s a great way to help build a solid foundation for business success.

Given all that, you’d think this would have been an easy decision for me.  But it wasn’t.  As you may know, I’d been itching to create another startup, and I had spent a lot of effort exploring technology solutions to a few business problems and evaluating a number of opportunities to commercialize technology originating from Carnegie Mellon University and the University of Pittsburgh.  I was actually getting very close to starting a tech business.

And then, to make my decision even more difficult, I was surprised and honored to be asked to teach at a prominent west coast university and to lead a tech company’s product organization.  I will continue teaching my strategic marketing and product management course at Carnegie Mellon, but I feel lucky that Raul asked me to join when he did.  Opportunities like this don’t come by often; I’m glad I didn’t miss it.

Jim Berardone