Proposed Interoperability Standards Measurement Framework Public Comments

The comment period for this document ended at 5 p.m. ET on Monday, July 31, 2017. Comments for this document will be posted shortly.

The document solicits feedback on a proposed industry-wide measurement framework for assessing the implementation and use of health care interoperability standards. Currently, stakeholders’ capabilities to measure and report on the use of standards vary significantly across the health IT ecosystem. This framework aims to help health IT developers, health information exchange organizations, and health care providers move towards a set of uniform measures to assess interoperability progress.

Please note that comments and recommendations submitted as part of this process will be made public by ONC.

Download the Measurement Framework

Sheree Speakman

Haven't read it yet.


Page 6 b, "particular versions of their products or services that are not already derivable from the ONC Certified Health IT Product List (CHPL)4."
Why would there be a gap between the standard implemented in a system and what is listed in the CHPL? Is this intended to track a certified release version/date list versus what may have been added through service updates over time within a specific release?

Page 6 c, "Product version with standard implemented deployed to end users: Health IT developers should publicly report how many end users have deployed a product version or subscribed to a service with a standard implemented."
Please clarify that client IDs will not be available in the data collection; this information could be misused in a competitive market and reduce the quality of the data retrieved.

Page 6 Objective 2, "could result from a number of factors, including, but not limited to, insufficient education on how to use the functionality enabled by the standard, difficulty finding or using the functionality, or lack of other users with whom to exchange."
Other factors: a purposeful administrative choice by the consumer to delay, the expense to the consumer of updating, and the training of system users. We frequently see clients upgrading, but upgrading like for like and slowly implementing components over time. Providers have established workflows and behaviors; changing system behavior or adding new requirements is disruptive. From the perspective of the consumer, it would be valuable to determine what they have implemented, what they have not (and why), and their implementation timeline for standards not in use. Understanding the context of use/non-use is critical. Do consumers see functional, clinical value in features/standards beyond checking a box?

Page 7 a, "Standard used by end user: Health IT developers and exchange networks should publicly report what percentage of end users (who have access to a product version with a standard implemented) have actually used a particular standard."
I contend IT vendors have devoted considerable resources to meeting standards. There is value in knowing the consumer's perspective on the workflow changes that have come to pass based on standards: consumers' beliefs about how they have been impacted, positively and negatively; what features they have sought that have been delayed or not delivered related to meeting standards; what standards have added value to the provision of care; and what standards are seen as having no or negative impact on the provision of care.

Page 7 a, "This will enable stakeholders to identify standards that have been deployed widely but not used by end users."
End users and administration. We have seen hospital IT and administration control what information is made available about features, e.g., data export. When we introduced development underway, the best response from a client provider was, "Why would you do that?" Not all features are deemed valuable, and information about features is withheld from the end user.

Page 7 c, "Level of conformance/customization of interoperability standards: Stakeholders have limited experience to date in measuring this area. As a result, additional foundational work is essential to identify the best approach(es) to track the conformance and customization of standards after implementation in the field. ONC requests stakeholder feedback on the best methods to measure this area."
A consumer list of implemented features, identifying those for which they have requested vendor enhancements (provided or not). A bonus would be knowing what the enhancements were. Also list unimplemented features, identifying whether they remain unimplemented because required functional enhancements are needed. From a provider's view, there is only one right way to do X: my way. An overstatement, but it is frequently true.

Page 8, Table 1, Obj 2, measurement a, "Standard used by end users in deployed systems"
What is the level of acceptance of required standard functionality? Is it seen as useful/valuable?

• Is a voluntary, industry-based measure reporting system the best means to implement this framework? What barriers might exist to a voluntary, industry-based measure reporting system, and what mechanisms or approaches could be considered to maximize this system's value to stakeholders?
o A voluntary system will not net a true picture. Responses will be low from vendors with poor implementations and likely low from consumers with poor implementations. If mandatory reporting compliance from IT vendors and consumers were required, what would it be tied to in order to ensure compliance? And what benefit would the system consumer receive from what would be one more reporting requirement that, given the nature of the information (end-user provider information), could be cumbersome to collect?
• Does the proposed measurement framework include the correct set of objectives, goals, and measurement areas to inform progress on whether the technical requirements are in place to support interoperability?
o Implementation compliance may be linked to the perspective that the workflow/data capture a standard requires has low value in the clinical context of patient care. Collect the providers' view of a standard's impact on care.
• Would health IT developers, exchange networks, or other organizations who are data holders be able to monitor the implementation and use of measures outlined in the report? If not, what challenges might they face in developing and reporting on these measures?
o Concern related to system monitoring: does the monitored information include PHI that could become available to the monitoring party? Does monitoring become invasive to the consumer? Instrumenting software for monitoring can also have a system impact.

• Ideally, the implementation and use of interoperability standards could be reported on an annual basis in order to inform the Interoperability Standards Advisory (ISA), which publishes a reference edition annually. Is reporting on the implementation and/or use of interoperability standards on an annual basis feasible? If not, what potential challenges exist to reporting annually? What would be a more viable frequency of measurement given these considerations?
o Consider the industry impact on vendors and consumers of the head count required for data collection/management. What value will either party derive from the exercise?
• Given that it will likely not be possible to apply the measurement framework to all available standards, what processes should be put in place to determine the standards that should be monitored?
o Start with standards required for MU certification. Of those, which standards are critical to the objectives of interoperability, and which have a positive impact on patient care and outcomes?

Ryan Joyce

TBD

Karen Green

My organization represents post-acute care, and we are struggling with interoperability for transitions in care from acute care. We are able to receive CCDs, but the acute care side is not sending them in a timely manner or with the content we need. Meaningful use has established some good format and content standards for communicating patient information, and this is measured by quantity of transactions. However, there is no measure of the quality of the content or the timeliness of applying the standard. I think lack of timeliness is the biggest barrier to adoption of standards. APIs and FHIR help to facilitate timelier exchange of information, but the triggers for these mechanisms have not been clearly defined so that the information flows in a timely way. Post-acute care, at the tail end of the interoperability food chain, does not get electronic information soon enough to determine whether there is medical necessity. We have to access EHRs or visit the patient face to face to get the information we need. We then assess whether the patient should go to an IRF, SNF, OP, or home care. This takes several people and processes to accomplish. Interoperability should focus on improving transitions in care, allowing for digital rounds to do a clinical assessment and support a clinical decision as to where the patient needs to go. The efficacy of the standards being applied can be measured by the time it takes to transition a patient, by reduced LOS, or by the time it takes to communicate the information needed to navigate the patient to the right care setting. The number of days it takes to receive a document under the CCD architecture, by HISP or HIE, should be discharge date minus one day. I would like to see number of days as a metric; it would help foster adoption if a receiving organization could trust that the standard CCD was timely and useful.
Getting a CCD 2-3 days after discharge renders the information "meaningfully useless", especially when we are trying to identify high-risk patients that need extra attention as they come to our care settings.
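The days-to-receipt metric proposed above is straightforward to compute. A minimal sketch follows; the function names and the one-day threshold are illustrative, not part of the proposed framework.

```python
# Sketch of the timeliness metric described above: days from hospital
# discharge to receipt of the CCD, flagging documents that arrive too
# late to be clinically useful. Names and threshold are illustrative.
from datetime import date

def ccd_lag_days(discharge: date, received: date) -> int:
    """Days between hospital discharge and receipt of the CCD."""
    return (received - discharge).days

def is_timely(discharge: date, received: date, max_lag_days: int = 1) -> bool:
    """A CCD counts as timely if it arrives within max_lag_days of discharge."""
    return ccd_lag_days(discharge, received) <= max_lag_days

# A CCD received two days after discharge would be flagged as late.
print(ccd_lag_days(date(2017, 7, 10), date(2017, 7, 12)))  # 2
print(is_timely(date(2017, 7, 10), date(2017, 7, 12)))     # False
```

Aggregating this per sending organization (e.g., median lag per hospital) would give receiving organizations the trust signal the comment asks for.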

rhys giron-Delahay

Any framework and standards guidelines are important.

Tony Mallia

We all talk about interoperability – how interoperable systems are beneficial and how networks can be assembled from interoperating systems – but in many cases we do not know when we have reached interoperability. That is partly because there appear to be no criteria against which we can judge achievement.
It has become apparent that there can be impediments to achieving interoperability in the specifications or standards themselves. Just because a specification is a standard does not guarantee interoperability. It is left to the implementers after the specification has been released to find the interoperability issues with the specification. The measurement framework is addressing the collection of this experience. This is late in the standard development process and may result in an inadequate specification being adopted for implementation by a community. We need a preventative approach to avoiding the deployment of non-interoperable specifications.
Another measurement in the framework should be the assessment of the standard against interoperability criteria developed in a Theory of Interoperability. This can be used as a quality mechanism against a draft standard to assess whether it will result in interoperable systems.
Initial work on the Theory is intended to be understandable by specification developers and does not cover the math or science of interoperability.

valita fredland

....

Paul Wilson

These responses are being submitted from The National Council for Prescription Drug Programs (NCPDP) as reviewed by The Education, Legislation and Regulations Task Group.

Responses:
1) Is a voluntary, industry-based measure reporting system the best means to implement this framework? What barriers might exist to a voluntary, industry-based measure reporting system, and what mechanisms or approaches could be considered to maximize this system's value to stakeholders?
a. Although incentives may need to be offered to ensure a high-level of response, a voluntary reporting system appears to be the best option. Barriers include participation of the reporting entities and the standardization of what is being counted and measured.
b. NCPDP recommends ONC look at current models that exist for surveying the use of standards and interoperability. For example, the MN eHealth Initiative is one such entity that has been surveying eHealth adoption since 2004. Reviewing current models will allow for the data being collected to be aligned with that of existing reporting systems.
c. Proper communication of this effort, the benefits, and outcomes of collecting the data will be essential as many entities consider their data proprietary and confidential.
2) What other alternative mechanisms to reporting on the measurement framework should be considered (for example, ONC partnering with industry on an annual survey)?
a. Partnering with the industry on an Annual Survey is an option if the voluntary reporting survey from question 1 does not result in sufficient data or responses.
b. An alternative may be to mandate reporting through existing certification programs or appropriate regulation.
3) Does the proposed measurement framework include the correct set of objectives, goals, and measurement areas to inform progress on whether the technical requirements are in place to support interoperability?
a. NCPDP recommends ONC work with each Standards Development Organization (SDO) to determine objectives, goals and appropriate measurement areas.
b. It may not be appropriate to have product development plan details reported as a measurement, due to the proprietary nature of the vendor specific information.
4) What, if any gaps, exist in the proposed measurement framework?
a. A gap may exist if ONC is unable to successfully determine the denominator for all measures, e.g. all system vendors, all prescribers, etc.
b. Additionally, there appears to be a gap due to confusion about how the data will be used. Once the stakeholders clearly understand how the data will be used, the voluntary reporting will likely increase.
5) Are the appropriate stakeholders identified who can support collection of needed data? If not, who should be added?
a. Your potential list of stakeholders is a good start. ONC will need to identify and remove any redundancies in the data to ensure accurate measurement and reporting.
b. ONC may want to contact states (e.g. MN) or other organizations (e.g. eHealth Initiative) that are already collecting similar data for guidance.
6) Would health IT developers, exchange networks, or other organizations who are data holders be able to monitor the implementation and use of measures outlined in the report? If not, what challenges might they face in developing and reporting on these measures?
a. Developers may not have insight into when their customers are going to implement a specific version that includes the functionality ONC wants to track, or at least not enough detailed insight to track the implementation.
b. Measures should be reported based on which transactions and how many transactions are exchanged without involving the end user. Depending on the method of reporting transactions, reaching the end user may be very challenging.
7) Ideally, the implementation and use of interoperability standards could be reported on an annual basis in order to inform the Interoperability Standards Advisory (ISA), which publishes a reference edition annually. Is reporting on the implementation and/or use of interoperability standards on an annual basis feasible? If not, what potential challenges exist to reporting annually? What would be a more viable frequency of measurement given these considerations?
a. The volume of some transactions for a specific standard is currently tracked annually (e.g. MN and Surescripts National Progress Report). Reporting on an annual basis would depend on the complexity of the reporting requirements. Timing of the reporting would need to be considered as the timeline for the submission of these reports would be dependent on the requirements. Stakeholders may need to alter internal systems to ensure transactions are properly counted and reported, which could cause reporting delays. It is impossible to determine feasibility of any proposed deadlines until the requirements are known.
b. Links to some current industry reports are noted below:
i. Council for Affordable Quality Healthcare (CAQH): www.caqh.org/explorations/caqh-index/
ii. Cover My Meds: epascorecard.covermymeds.com/
iii. eHealth Initiative: www.ehidc.org/
iv. State of Minnesota: www.health.state.mn.us/e-health/assessment/index.html
v. Surescripts: http://surescripts.com/news-center/national-progress-report-2016/
8) Given that it will likely not be possible to apply the measurement framework to all available standards, what processes should be put in place to determine the standards that should be monitored?
a. NCPDP recommends ONC start with standards named in any Federal regulation. Additional standards may be considered in future phases, based on their status within the ONC Interoperability Standards Advisory.
b. ONC may want to consider prioritizing the SCRIPT standard transactions, which have a more direct impact to patient care and outcomes, (e.g. NEWRx, ePA) over those that address administrative and billing transactions or less critical exchanges of data.
9) How should ONC work with data holders to collaborate on the measures and address such questions as: How will standards be selected for measurement? How will measures be specified so that there is a common definition used by all data holders for consistent reporting?
a. NCPDP recommends ONC work with the SDOs to assist in identifying the standards to be reported and the stakeholders involved in the implementation of those standards.
b. ONC may also want to consider working with the identified stakeholders on specific measures as they apply to each standard.
c. The SDOs should assist in establishing the common measurement definitions for consistent reporting.
10) What measures should be used to track the level of "conformance" with or customization of standards after implementation in the field?
a. NCPDP recommends ONC work with the SDOs, who likely have mechanisms for addressing non-conformance with their standard(s).

Dan Tran

1) Is a voluntary, industry-based measure reporting system the best means to implement this framework? What barriers might exist to a voluntary, industry-based measure reporting system, and what mechanisms or approaches could be considered to maximize this systemÕs value to stakeholders?

Minimizing the friction to leverage/integrate this framework will be key in ensuring high adoption rates, e.g. by having ONC certification test labs gather this information as a voluntary part of the certification process, or by having a web portal where you can send non-PHI data samples and standards will be automatically recorded to a central registry.

Additional incentives would also help to increase the likelihood of participation. For example, providing a digital badge that vendors can use that displays all of their supported ONC-verified interoperability standards.

Instead of implementing a voluntary system, why not create an API that takes a C-CDA summary document representing the code system standards in use? EHR systems are currently expected to report quarterly to their certifying entity. As part of this process, a summary C-CDA containing only new codes, added since the last version of a coding system, could be mandated as part of the C-CDA summary sample. This would provide a clear indication of 1) the coding systems being used in the EHR and 2) the versions of the coding systems being used.
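The registry side of this idea could be quite small: each coded element in a C-CDA already carries `codeSystem`, `codeSystemName`, and `codeSystemVersion` attributes, which is exactly the information the commenter wants recorded. The sketch below is illustrative only (the sample fragment and function name are invented, and this is not an actual ONC API).

```python
# Illustrative sketch (not an actual ONC API): extract the code systems
# and versions referenced in a C-CDA fragment, i.e., the information a
# central registry endpoint could record. SAMPLE_CCDA is a hand-made,
# minimal fragment, not a conformant document.
import xml.etree.ElementTree as ET

SAMPLE_CCDA = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <component><structuredBody><component><section>
    <entry><observation>
      <code code="8302-2" codeSystem="2.16.840.1.113883.6.1"
            codeSystemName="LOINC" codeSystemVersion="2.63"/>
    </observation></entry>
    <entry><observation>
      <code code="38341003" codeSystem="2.16.840.1.113883.6.96"
            codeSystemName="SNOMED CT" codeSystemVersion="2017-07"/>
    </observation></entry>
  </section></component></structuredBody></component>
</ClinicalDocument>"""

def code_systems_used(ccda_xml: str) -> dict:
    """Map each codeSystem OID found in the document to (name, version)."""
    root = ET.fromstring(ccda_xml)
    systems = {}
    for el in root.iter():
        oid = el.get("codeSystem")
        if oid:
            systems[oid] = (el.get("codeSystemName", ""),
                            el.get("codeSystemVersion", ""))
    return systems

# Maps OID -> (name, version), e.g. "2.16.840.1.113883.6.1" -> ("LOINC", "2.63")
print(code_systems_used(SAMPLE_CCDA))
```

Submitting such a de-identified summary alongside the existing quarterly report would let the registry track both of the signals named above without touching PHI.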

---

2) What other alternative mechanisms to reporting on the measurement framework should be considered (for example, ONC partnering with industry on an annual survey)?

As mentioned above, having ONC proctors record this information during certification would reduce the burden on vendors. Having a web portal that vendors can use to update their supported standards (e.g. by uploading non-PHI data samples) would also be a potential option.

---

3) Does the proposed measurement framework include the correct set of objectives, goals, and measurement areas to inform progress on whether the technical requirements are in place to support interoperability?

The standards themselves have versions as well, and it's unclear how those will be recorded via the metrics.

---

4) What, if any gaps, exist in the proposed measurement framework?

Although the level of conformance is a very important statistic, it may be hard to measure. If this is (number of conforming systems / total number of systems), what is the definition of a system? Customizations on top of this add another level of complexity.

---

5) Are the appropriate stakeholders identified who can support collection of needed data? If not, who should be added?

Not sure what role ONC certification test labs could play in recording this information, but they might be another organic forum for collecting standards information.

---

6) Would health IT developers, exchange networks, or other organizations who are data holders be able to monitor the implementation and use of measures outlined in the report? If not, what challenges might they face in developing and reporting on these measures?

This depends on the specific tools and resources available to assist with the process. Clear guidance and having resources to assist with metric collection and reporting will make it easier for data holders to implement this framework.

---

7) Ideally, the implementation and use of interoperability standards could be reported on an annual basis in order to inform the Interoperability Standards Advisory (ISA), which publishes a reference edition annually. Is reporting on the implementation and/or use of interoperability standards on an annual basis feasible? If not, what potential challenges exist to reporting annually? What would be a more viable frequency of measurement given these considerations?

Having a website or live portal similar to CHPL would allow the standards to be reported on a rolling basis. Updates on things like "percentage of certified products that have leveraged the standards measurement framework" could then be included in the ISA publication.

---

8) Given that it will likely not be possible to apply the measurement framework to all available standards, what processes should be put in place to determine the standards that should be monitored?

An initial broad survey can be sent out to create a baseline of currently implemented standards. The results of that survey can be used to generate the initial list of standards. Additional standards can then be added at ONC's discretion.

---

9) How should ONC work with data holders to collaborate on the measures and address such questions as: How will standards be selected for measurement? How will measures be specified so that there is a common definition used by all data holders for consistent reporting?

Direct communications and surveys (e.g. sending things out via mailing lists, emailing all products currently in CHPL, etc.) should help, as well as having open forums and meetings, etc.

---

10) What measures should be used to track the level of ÒconformanceÓ with or customization of standards after implementation in the field?

As mentioned above, these metrics will be challenging to measure and record in a meaningful way. An easy way to track customizations would be a simple "Do you have customizations on top of the standard?" question, instead of trying to have data holders report a percentage.

abdeljalil mekkaoui

PROGRAMME FILING