The engines at BenchMine.com, powered by Artificial Intelligence methods and principles of User Experience design, deliver the following technology-enabled advances over the benchmarking status quo.
1. A user experience based on selecting questions to be answered and receiving noteworthy insights as answers, rather than on pushing out volumes of data and dashboards with no clear sense of what is being answered or what is noteworthy. The first-encounter UI poses these questions:
o How is this provider doing? (i.e., where does it stand out positively or neutrally?)
o Where could it improve? (where does it stand out negatively?)
o Where has it changed? (over the last year or two, what changes stand out?)
o What’s best in class? (what are top achievements on specific measures by similar providers?)
o Where does it stand in its county? (or other geography, based on scoring the insights found)
2. Insights are written as readable, shareable English sentences rather than dashboards. This key novelty led to our trademarked “A sentence is worth 1,000 data.®” and addresses a problem identified in the National Academy of Medicine article Fostering Transparency in Outcomes, Quality, Safety, and Costs: “Research has demonstrated that many of the current public reports make it cognitively burdensome for the audience to understand the data.” We believe dashboards are fine for alerting that a warehouse is on fire or your car is nearly out of gas, but not for motivating thoughtful deliberation on performance improvement.
3. Calculating provider latitude & longitude, which enables benchmarking each provider against others nearby, e.g., within 20 or 50 miles, or other distances selected by the user.
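A proximity filter of this kind can be sketched with the standard haversine great-circle formula; the provider records and function names below are illustrative, not BenchMine's actual code:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def providers_within(center, providers, miles):
    """Keep providers whose geocoded location lies within `miles` of `center`.

    `center` is a (lat, lon) tuple; each provider is assumed to carry
    "lat" and "lon" keys (a hypothetical record layout)."""
    return [p for p in providers
            if haversine_miles(center[0], center[1], p["lat"], p["lon"]) <= miles]
```

With a user-selected radius of 20 or 50 miles, the filtered list becomes the nearby peer group against which the provider is benchmarked.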
4. Insights are supplemented with closely related facts that help the user grasp the significance or scope of the stand-out behavior or outcome. These addenda are also written in precise English.
5. Peer groups are not limited to the usual state, national, and perhaps a pre-defined cohort. Instead, the engine does a massive search for peer groups, expressed as a simple combination of data attributes, in which the benchmarked provider stands out. Geographic proximity can be one of these attributes, alone or with others.
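Under the simplifying assumptions that providers are records of attributes and that “standing out” means holding the extreme (here, lowest) value of a measure, this kind of peer-group search might be sketched as:

```python
from itertools import combinations

def standout_peer_groups(target, providers, group_attrs, measure, max_attrs=2):
    """Search combinations of attributes shared with `target` and report the
    peer groups in which `target` has the extreme (lowest) value of `measure`.

    All names here are illustrative; the real engine searches a far larger
    space of attribute combinations, including geographic proximity."""
    hits = []
    for r in range(1, max_attrs + 1):
        for attrs in combinations(group_attrs, r):
            # Peers are providers sharing the target's value on every chosen attribute.
            peers = [p for p in providers
                     if all(p[a] == target[a] for a in attrs)]
            if len(peers) > 1 and target[measure] == min(p[measure] for p in peers):
                hits.append((attrs, len(peers)))
    return hits
```

Each hit pairs an attribute combination (the peer-group definition) with the group's size, which is how an insight can say “of the 88 hospitals that …”.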
6. An especially novel type of benchmarking insight involves aligning two numeric measures. One measure expresses the stand-out behavior, while the second forms the peer group, possibly in combination with symbolic attributes. For example, “In Texas, Park Plaza Hospital in Houston, TX has the lowest nurse-communication rating (2 stars) of the 88 hospitals with as high a doctor-communication rating (4 stars).”
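A minimal sketch of that alignment, assuming the peer group is formed by every provider scoring at least as well as the target on the aligning measure (field names hypothetical):

```python
def aligned_measure_insight(target, providers, standout_measure, align_measure):
    """Peers are providers scoring at least as well as `target` on
    `align_measure`; report whether `target` has the lowest score on
    `standout_measure` among them, and the peer-group size."""
    peers = [p for p in providers if p[align_measure] >= target[align_measure]]
    is_lowest = target[standout_measure] == min(p[standout_measure] for p in peers)
    return is_lowest, len(peers)
```

In the Park Plaza example, the aligning measure would be the doctor-communication rating (at least 4 stars) and the stand-out measure the nurse-communication rating.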
7. Because known algebraic relationships among measures can be specified to the engine, it can insert action-oriented remarks such as the one italicized in this nursing-homes insight (see it online): “Carroll Manor Nursing & Rehab in Washington, DC has the fewest total nurse staffing hours per resident per day (2.05) of all the 724 nursing homes that are located within a hospital. That 2.05 compares to an average of 4.8 across those 724 nursing homes. Reaching the average of 4.8 would imply an extra 80.9 nursing staff per day, assuming an 8-hour workday.”
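For illustration, the arithmetic behind that remark can be sketched as below; the residents parameter is hypothetical, since the insight quotes only the per-resident staffing hours:

```python
def implied_extra_staff(current_hours, target_hours, residents, workday_hours=8.0):
    """Extra staff per day needed to lift nurse staffing hours per resident
    per day from `current_hours` to `target_hours`, assuming each staff
    member works `workday_hours` hours. Names are illustrative."""
    extra_hours_per_day = (target_hours - current_hours) * residents
    return extra_hours_per_day / workday_hours
```

With the 2.05 and 4.8 figures from the insight, each resident accounts for an extra 2.75 nursing hours per day; multiplying by the facility's census and dividing by an 8-hour workday yields a staffing gap like the one quoted.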
8. Input data can be numeric, symbolic, yes/no, and even set-valued, which gives rise to innovative comparisons like this: “Of the 1,488 hospitals that have at least 4 stars as an overall hospital rating, Shasta Regional Medical Center in Redding, CA is one of just 2 that have a 1-star rating in each of cleanliness, communication about medicines, doctor communication, and quietness (4 total).”
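A toy version of that set-valued comparison, with hypothetical field names, might look like:

```python
def has_rating_on_all(provider, measures, rating):
    """True if the provider holds `rating` on every measure in the set."""
    return all(provider["stars"][m] == rating for m in measures)

def count_matching(providers, measures, rating, overall_min):
    """Within the cohort of providers rated at least `overall_min` overall,
    find those rated `rating` on each measure in the set `measures`."""
    cohort = [p for p in providers if p["overall"] >= overall_min]
    matches = [p for p in cohort if has_rating_on_all(p, measures, rating)]
    return len(cohort), matches
```

The cohort size and match count supply the “of the 1,488 hospitals … one of just 2” phrasing in the example insight.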
9. As discussed in an AHRQ report on usage of hospital evaluation websites, consumers and healthcare professionals often need different content. So we have introduced a “Switch Audience” toggle, visible when an insight contains content that appeals to one audience but not the other, which lets users declare their roles. See the difference by switching the audience to “professional” at this insight on emergency-room wait times and noticing the paragraph that begins with “Note that …”
10. The final novelty is automation, so that many provider measures can be assessed with the same (human) effort, addressing this point by Dr. Robert Brook: “… quality must be measured in a comprehensive way in order to motivate an institution or physician to provide high-quality care. […] if just a few measures are used to assess quality, the quality of care delivered across all patients in all diseases will be distorted, emphasizing those things that are being measured. Fortunately, we have many well-tested comprehensive quality of care measures that can help prevent this distortion.”