OnlyBoth benchmarks U.S. hospitals both as a public service and as a visible demonstration of the power of an automated Benchmarking Engine. It lets hospital stakeholders instantly discover, in perfect English, how they're doing: not against absolute standards or arbitrary peers, but against all peers and groups.
We launched our first version this summer. Today we relaunched our hospitals benchmarking engine based on fresh data and technical advances:
- updated Hospital Compare dataset from Medicare.gov, now covering 4,803 hospitals
- new hospital attributes relating to hospital performance and geography
- better expression of key types of insights
- improved heuristics leading to more insights per hospital
- addition of data on hospital networks, enabling intra-network comparisons
1. We have refreshed the data in the hospital application based on a late-September data release at the Hospital Compare data download page. This new release also contains new hospital attributes, as discussed below.
2. Since geography is an important determinant of peer groups, we’ve added attributes that enable grouping hospitals into East Coast, Southern, and Western states. We’ve also added two new attributes from the updated Hospital Compare data that relate to deaths or unplanned readmission due to coronary artery bypass grafting (CABG) surgery, and five new attributes that express hospital-readmission ratios for various afflictions.
3. A key type of insight expresses how entities within an elite peer group fall short along some key dimension. For example, our recent Harvard Business Review article, which explains why benchmarking is done wrong and how to do it right, gives this example of Stanford Hospital:
None of the other 344 hospitals with as many patients who reported YES, they would definitely recommend the hospital (85%) as Stanford Hospital in Stanford, CA also has as few patients who reported that the area around their room was always quiet at night (41%). That is, among those 344 hospitals, it has the fewest patients who reported that the area around their room was always quiet at night.
As the saying goes, this was too clever by half. Based on feedback from user surveys, this insight now appears, with the refreshed data, like this:
Stanford Hospital in Stanford, CA has the fewest patients who reported that the area around their room was always quiet at night (40%) among the 811 hospitals with at least 80% of patients who reported YES, they would definitely recommend the hospital (Stanford Hospital is at 84%). That 40% compares to an average of 69.4% and standard deviation of 10.7% across the 811 hospitals.
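The logic behind this type of insight can be sketched roughly as follows. This is a minimal illustration, not OnlyBoth's actual implementation; the sample hospitals, scores, and the 80% threshold are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical sample: (name, % who would definitely recommend,
#                       % reporting the area was always quiet at night)
hospitals = [
    ("Hospital A", 91, 72),
    ("Hospital B", 84, 40),   # elite on "recommend", yet lowest on "quiet"
    ("Hospital C", 80, 68),
    ("Hospital D", 75, 55),   # excluded: below the elite-peer threshold
]

# 1. Form the elite peer group: hospitals at or above 80% on "recommend".
peers = [h for h in hospitals if h[1] >= 80]

# 2. Find the peer that falls furthest short on the other dimension.
laggard = min(peers, key=lambda h: h[2])

# 3. Put the shortfall in context with the peer group's average and spread.
quiet_scores = [h[2] for h in peers]
print(f"{laggard[0]} has the fewest patients reporting quiet at night "
      f"({laggard[2]}%) among the {len(peers)} hospitals at >= 80% recommend "
      f"(peer average {mean(quiet_scores):.1f}%, "
      f"std dev {stdev(quiet_scores):.1f}%).")
```

The engine runs this kind of slice-and-rank logic over every attribute pair and peer grouping, then keeps only the combinations that clear its insight heuristics.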
Of course, this improvement affects thousands of insights, and millions in the future.
4. We’ve improved the heuristics that enable finding valuable needles within the huge haystack that results from taking multiple slices out of a dataset of half a million hospital attribute values. Our new Hospitals Benchmarking contains 522,142 insights, or around 109 insights per hospital, compared to the previous 101 per hospital. The key benchmarking question – Where can this hospital improve? – has seen a 4% increase in answers per hospital.
5. For a hospital-network executive, it’s valuable to benchmark individual hospitals against others in the network, especially because knowledge transfer of good practices can happen more easily when two entities have the same owner. We’ve added a parent attribute that for now includes four networks: UPMC, Kaiser Foundation, Texas Health Resources, and NYC Health and Hospitals. We’ll add other hospital networks over time.
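An intra-network comparison of this kind can be sketched as follows. The `parent` field mirrors the new parent attribute described above, but the network names used here as keys, the metric, and the figures are hypothetical:

```python
from collections import defaultdict

# Hypothetical records: (hospital, parent network, readmission rate %)
records = [
    ("Hospital A", "Network X", 14.2),
    ("Hospital B", "Network X", 16.8),
    ("Hospital C", "Network X", 15.1),
    ("Hospital D", "Network Y", 13.9),
]

# Group hospitals by their parent-network attribute.
by_network = defaultdict(list)
for name, parent, rate in records:
    by_network[parent].append((name, rate))

# Within each network, pair the weakest hospital on the metric with the
# strongest, suggesting a target for intra-network knowledge transfer.
for network, members in by_network.items():
    if len(members) > 1:
        worst = max(members, key=lambda m: m[1])
        best = min(members, key=lambda m: m[1])
        print(f"{network}: {worst[0]} ({worst[1]}%) could learn from "
              f"{best[0]} ({best[1]}%).")
```

Because siblings share an owner, a gap surfaced this way is more actionable than the same gap against an unrelated peer.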
We expect that this hospitals application, and the diffusion of benchmarking engines in general, will further the goal of enabling universal betterment through data-driven comparison with peers, greatly simplified in terms of human work, but greatly expanded in terms of action-provoking insights.