The most crucial objective of any major IT transition is outstanding adoption. This is especially true in the conversion or major upgrade of an EHR platform. After spending many millions of dollars and throwing the entire organization into disruption for a year or more, the desired payoff of improved results is only possible when users fully embrace the new system. Merely rolling out the new technology, while necessary, doesn’t qualify as success.
It is understandable, therefore, that the Boards of healthcare organizations facing major IT transitions ask the logical question, “How will we know when we’ve succeeded?” That’s a simple question without a simple answer. It is inspired by the highly publicized ‘nightmare scenarios’ throughout healthcare of organizations at best failing to achieve desired results (and at worst severely damaging their ability to operate) after making massive investments in new EHR platforms.
A realistic and constructive response to the Board’s question must be rooted in the perhaps subtle yet pivotal distinction between usage and adoption. Usage is the ante in the IT transition game. It is necessary but not sufficient as a measure of success. After all, what choice do users have but to use the now ‘only game in town’ system?
On the other hand, true adoption may be more about psyche and emotions than tactics. It generally is the extent to which users, particularly clinicians and physicians, fully embrace the new system in a manner that leads to the behavioral changes behind improved results.
Usage and adoption are a duality. Both are essential in that neither alone can lead to success. They are the intelligence quotient (IQ) and emotional quotient (EQ) of success. Or perhaps the Yin and Yang. Pick your favorite analogy. The point is usage and adoption are not the same thing and the answer to the Board’s question must recognize their duality.
Usage: Successfully accessing and using the system to complete necessary activities.
Adoption: Making the new system ‘your own’ to improve patient care and achieve key results.
THE LIMITATIONS OF EHR USAGE DATA
Having worked arm-in-arm with users through many EHR transitions, the scenario is all too familiar: The IT project leader reports to the governance committee that ‘adoption’ has reached its anticipated go-live target, while up on the floors physicians are grumbling, slamming down their mice, and vowing to take their patients elsewhere.
How can such a dangerous disconnect exist when so much is on the line? The IT project leader and others are relying on logs and information generated by the EHR or new system. In truth, the data and reports derived from an EHR platform center on usage, not adoption. They therefore reflect only part of success and may be telling a partial story at a pivotal time. Those logs and reports do tell us who logged in, when and for how long, what tasks were and weren’t accomplished, and generally how broadly users are exercising the system. To be sure, that is valuable usage information.
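To make the distinction concrete, here is a minimal sketch of the kind of rollup an EHR access log can support: who logged in, for how long, and how many tasks were completed. The log schema (user, login, logout, tasks_completed) is a hypothetical illustration, not any vendor’s actual format.

```python
# Illustrative only: computing basic usage metrics from a hypothetical
# EHR access log. This is the "usage" side of the duality; nothing here
# reveals frustration, workarounds, or true adoption.
from datetime import datetime

access_log = [
    {"user": "dr_smith", "login": "2024-03-01 07:55",
     "logout": "2024-03-01 12:10", "tasks_completed": 34},
    {"user": "dr_smith", "login": "2024-03-01 13:00",
     "logout": "2024-03-01 18:45", "tasks_completed": 41},
    {"user": "rn_jones", "login": "2024-03-01 06:30",
     "logout": "2024-03-01 14:30", "tasks_completed": 58},
]

def usage_summary(log):
    """Roll up sessions per user: session count, hours in system, tasks done."""
    summary = {}
    for rec in log:
        start = datetime.strptime(rec["login"], "%Y-%m-%d %H:%M")
        end = datetime.strptime(rec["logout"], "%Y-%m-%d %H:%M")
        hours = (end - start).total_seconds() / 3600
        s = summary.setdefault(rec["user"],
                               {"sessions": 0, "hours": 0.0, "tasks": 0})
        s["sessions"] += 1
        s["hours"] += hours
        s["tasks"] += rec["tasks_completed"]
    return summary
```

A dashboard built on this tells you dr_smith spent ten hours in the system and completed 75 tasks; it cannot tell you whether those hours were productive or agonizing.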
However, what EHR information can’t tell you is perhaps much more valuable. It cannot convey the telltale symptoms of good versus poor adoption. How much help did the user need to accomplish these tasks? Is it taking so much time that the user may suffer “EHR burnout”? Are they accomplishing those tasks in the most expedient manner possible, consistent with new workflows? What is their level of frustration? Are they improperly delegating actions to others out of that frustration? Are they using workarounds that will be detrimental to downstream activities or analytics? Do physicians see the new system as a reason to keep their patients there — or even bring more?
Perhaps most importantly, EHR information cannot convey the extent to which the new system’s capabilities are leading to improved decision-making through behavioral and workflow or process changes. Those changes were at least part of the justification for the massive investment in the first place. Without them, there can be no game-changing ROI.
A PRACTICAL APPROACH TO ADOPTION ANALYTICS
Now more than ever, there is a need for successful adoption of EHR platforms. We are seeing a “physician burnout” epidemic directly attributable to the amount of work EMRs have added to physicians’ daily load; many physicians now spend more time in the EMR than on direct patient care.
Nurses are losing time as well. For the first time, nursing time has shifted significantly toward indirect care, while direct patient care (as a percentage of overall time) has decreased. It should be the opposite. Ultimately, this means that inefficient and difficult-to-use EMRs are driving higher costs for health systems.
True adoption is all about behavioral change — embracing the new technology to make better decisions and establish improved processes. But the EHR itself is little help in measuring it. So how can an organization gauge adoption throughout a major IT transition?
LESSON LEARNED: BALANCING PRECISION WITH PROGRESS
When I was a green bean consultant with Booz Allen & Hamilton, I recall one meeting at a hospital that taught me a few leadership lessons that have served me well. I was in the ED conference room facilitating an increasingly heated debate between clinicians about methods used to derive a particular KPI measuring performance gains.
One side of the table insisted on a method that yielded an average of 3.1. The other side backed a somewhat different method, yielding 3.7. Finally, after much pounding of the table, a VP stood up and said, in essence, “What does it matter?! Both results stink compared to our target. So why don’t we use this energy to improve rather than measure?”
I learned then that 1) there is a profound difference between precision and being directionally correct, and 2) the cost of the former can be huge compared to its typically small incremental benefit over the latter. In other words, match the level of detail with reasonable decision-making tolerances.
What does all this have to do with measuring adoption? Everything. Adoption differs meaningfully from usage in that it is all about behaviors, which are inherently difficult and subjective to measure. What is needed, therefore, is a practical approach to adoption analytics that is directionally correct (to the point where emerging problems are reliably flagged) but not necessarily precise to the decimal point.
Santa Rosa’s approach to adoption analytics during EHR go-lives, for example, seeks to strike the best possible balance between these often-competing forces of precision and practicality. Certainly, a comprehensive usage dashboard is established based on best practices across numerous clients. Beyond that, however, during every shift in each zone (site/area), the activation Zone Leads complete a quick web-based survey that reflects their experienced judgment of how well and independently users are embracing the new technology. The survey covers about 15 adoption metrics, each evaluated on a scale from “Needs Major Handholding” to “Entirely Self-Sufficient” with several gradients in between. In addition, the Zone Lead identifies and comments on any physicians struggling more than most.
This short adoption survey takes only about five minutes, and the data is automatically formatted and uploaded into a cloud-based analytics tool. Near real-time snapshots and time-based trends of each zone’s adoption status (by shift) are prepared, rolled up by site as well as by individual adoption metric, along with an aggregated view across the enterprise. These insights go to Santa Rosa’s Activation Adoption Team specialists, who scrutinize usage and adoption metrics side by side and share key findings with the client, with a keen eye toward proactive interventions and targeted remediation, as illustrated to the right.
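The rollup described above can be sketched in a few lines. This is a simplified illustration only: the metric names, the five-point mapping of the ordinal scale, and the flag threshold are assumptions for the sketch, not Santa Rosa’s actual instrument or tooling.

```python
# Illustrative only: rolling up per-shift Zone Lead survey responses into
# zone-level adoption scores and flagging zones that need intervention.
from statistics import mean

# The survey's ordinal scale, mapped to numbers so scores can be trended.
# Labels between the two endpoints are assumed for illustration.
SCALE = {
    "Needs Major Handholding": 1,
    "Needs Frequent Help": 2,
    "Needs Occasional Help": 3,
    "Mostly Self-Sufficient": 4,
    "Entirely Self-Sufficient": 5,
}

surveys = [  # one record per zone per shift (hypothetical data)
    {"zone": "ED", "shift": "Day",
     "ratings": {"Order Entry": "Needs Occasional Help",
                 "Documentation": "Needs Frequent Help"}},
    {"zone": "ICU", "shift": "Day",
     "ratings": {"Order Entry": "Mostly Self-Sufficient",
                 "Documentation": "Entirely Self-Sufficient"}},
]

def adoption_rollup(records, flag_below=3.0):
    """Average the ordinal ratings per zone/shift; flag low-scoring zones."""
    scores = {}
    for rec in records:
        avg = mean(SCALE[label] for label in rec["ratings"].values())
        scores[(rec["zone"], rec["shift"])] = round(avg, 2)
    flagged = [zs for zs, avg in scores.items() if avg < flag_below]
    return scores, flagged
```

The point of the sketch is the design choice, not the arithmetic: a directionally correct score per zone per shift, refreshed every shift, is enough to reliably flag emerging problems without pretending to decimal-point objectivity.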
USING ADOPTION AS A MEASURE OF SUCCESS
Is this approach to adoption analytics precise or 100% objective? No: it relies on the highly experienced yet subjective input of Zone Leads.
Is it efficient and effective at identifying emerging problems and gauging true adoption? Absolutely: very few ‘real’ adoption problems slip through the cracks.
Confusing usage with adoption undermines any major IT transition and can derail any chance at achieving its promised ROI. Organizations relying exclusively on EHR usage logs and reports to measure ‘adoption’ and identify when, where and how to improve see only a limited part of the vital picture. Clearly recognizing the nature of true adoption, coupled with a practical approach to adoption analytics, may be an organization’s best and cheapest insurance for the success of these mission-critical projects.