“Quality rankings” are often oxymoronic.
My local paper recently had a headline asking “Does your clinic measure up? Check Minnesota’s quality rankings.” The paper proceeded to report on data from an organization that “tracks healthcare quality and costs statewide,” offering rankings of clinics.
The report specifically cited two clinics that “kept 57 percent of adult patients with diabetes at optimal health.” Then came the key sentence: “patients meet this criterion if they have low blood sugar and blood pressure levels, refrain from smoking, and take aspirin and cholesterol-lowering drugs.”
The article continued by shaming two additional clinics that “only achieved a 20-percent success rate by comparison.”
Consider these ne’er-do-well clinics, where four out of five patients either failed to maintain low blood pressure or blood sugar levels, smoked, or failed to take aspirin and cholesterol-lowering drugs. Is the fact that a patient smokes the responsibility of their clinic? What about a failure to take aspirin? Or the sweet tooth that results in high blood sugar?
The notion that a clinic is responsible for the behavior of its patients is truly bizarre to me. It is reasonable to hold a clinic responsible for educating patients about the benefits of quitting smoking. But I know of no way a clinic can compel a patient to stop.
In fact, the only way I know of for a clinic to guarantee that none of its patients smoke is to have no patients. Ranking clinics on patient behavior is completely silly.
To be fair to the publishers of this study, the executive describing it did say that the measurement data isn’t intended as a way to compare clinics, because each has a unique patient population and its own challenges. He asserted that the goal is to give clinics data so they can identify weaknesses in their practices and find solutions.
That’s all well and good, but it is not how the data was presented. We frequently run segments discussing the social determinants of health (SDoH), and it doesn’t take much expertise to recognize that patients facing significant social challenges are less likely to comply with every medical recommendation.
Ranking clinics on patient compliance has to be among the most useless exercises possible, and it is likely a harmful one. If an organization actually worries about its rankings, it has a strong incentive to jettison non-compliant patients. Rather than working to bring patients into compliance, it is easier to send them away: dismiss the smokers and the non-adherent, and the percentage rises without a single patient getting healthier. A ranking that can be gamed by denying people healthcare isn’t a metric likely to prove useful to anyone.
Many years ago, I talked about a quality-ranking metric called “keeping patients safe.” That metric didn’t look at domestic violence or access to guns; it measured the percentage of safe and effective generic medications the clinic was prescribing. It was a cost metric, misleadingly labeled as a quality one.
When it comes to quality metrics, it is best to assume that the metric itself may lack quality.