Thursday, March 31, 2011

The Rise and Fall of HL7

Interfaceware is a Toronto-based HL7 solutions provider whose customers include the CDC, Cerner, GE Medical Systems, IBM, Johns Hopkins Medical, the Mayo Foundation, MD Anderson Cancer Center, Mount Sinai Hospital, Partners Healthcare Systems, Philips, Quest Diagnostics, the VA, and Welch Allyn.

At 2:57 pm EDT today, March 31, 2011 -- on what will surely prove to be a historic day in the advance of healthcare information technology in the direction of reason and light -- Eliot Muir, founder and CEO of Interfaceware, posted the following comment, which I here reproduce in full:

The Rise and Fall of HL7

The title of this post might seem an unusual comment from what is supposed to be an HL7 middleware vendor. But times are changing and that is not where I see our future.

Standards do not exist in a vacuum. To be successful, standards must address market needs and solve real problems so people can make or save money. Writing code costs money. Less than 0.01% of code gets written for free. The majority of code is written by people who are being paid to solve problems with it.

There are plenty of standards which are not worth the paper they are printed on because they are not sufficiently useful or practical.

Complicated standards can be pushed for a while but ultimately markets reject them. Even governments will ultimately reject complicated standards through a democratic correction process, although they usually waste a fair amount of other people's money along the way.

So back to HL7. Why was it successful?

Version 2.X of HL7 solved a very big problem for many people in healthcare IT back in the '90s. It replaced a lot of ad hoc data sharing mechanisms used in the industry at the time. It gave three points of value. Ironically, the first point is not even an official part of the standard.
  1. The so-called "de facto LLP standard" defined a uniform way to transport HL7 over a TCP/IP socket - this meant vendors could write standard socket implementations to exchange data.
  2. The EDI format of HL7 with its classic | and ^ separators meant vendors could write standard HL7 parsers (see the sketch below).
  3. The HL7 definitions gave some good suggestions on places to look for data.
And that is where the value stops.
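
To see how little machinery point 2 demands, here is a minimal parsing sketch; the sample ADT message and its field contents are hypothetical:

    # Minimal sketch of an HL7 v2 parser; the sample message is hypothetical.
    # Segments are split on carriage returns, fields on '|', components on '^'.
    # A real parser would treat MSH-1/MSH-2 (the separator definitions)
    # specially; this sketch hard-codes the classic separators.
    SAMPLE = (
        "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|201103310857||ADT^A01|MSG001|P|2.3\r"
        "PID|1||12345^^^HOSP^MR||DOE^JOHN||19700101|M\r"
    )

    def parse_hl7(message):
        segments = []
        for line in message.strip().split("\r"):
            fields = [f.split("^") if "^" in f else f for f in line.split("|")]
            segments.append(fields)
        return segments

    for segment in parse_hl7(SAMPLE):
        print(segment[0], segment[1:])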

It is a lie when a vendor tries to claim they are "HL7 compliant".

The term is meaningless.

The best any vendor can ever do is provide a stream of messages with fields that map adequately to most of the data from their application. HL7 interfaces always end up being a thin wrapper around the structure of the database of the application which feeds them. The standardization comes about because there are common ways of structuring a lot of the data. The pain comes from areas where it is unclear how to structure the data.

There are good reasons for the lack of "standard data models". Technology and society change, which means data models must also change to best describe new data requirements. Medicine changes. New entrepreneurs come up with clever new solutions and invent ways of using data that improve on old models.

HL7 is working on creating the final solution for healthcare interoperability - the Reference Information Model (RIM), which underlies the structure of version 3 (v3) of HL7.

I think that effort is doomed to fail for these reasons:
1. There is no such thing as a single optimal data model to serve all purposes. A formal data model is always going to be a square peg going into a round hole. Some problems are best solved by small simple models. There are approximations which work for certain problems but are not valid for others. If there were a single solution to everything then one person would invent it and the rest of us would be out of work.
2. There is substantial academic criticism of RIM that points to the semantic inconsistency within the model itself.
3. It is creating complicated standards which are expensive to implement.
The only organisations spending money on v3 are governments and some big corporations like Oracle, which based its Healthcare Transaction Base (HTB) on it. Oracle salespeople can sell ice to Eskimos, but I have not heard a lot of great success stories for that product.

Now let us fast forward to what I think will become the future: JSON-based web services over HTTPS. Let us look at the benefits:
1. HTTPS is analogous to LLP - only it comes with authentication and security baked in (sketched below).
2. JSON - the simplest format imaginable, with free parsers in every language and environment, including JavaScript, which is strategic as the language of the web.
3. JSON data names and values give good suggestions on places to look for data.
Hmmm. Notice something? The value is more than what HL7 offers. In fact, a lot more, since these are very mainstream technologies that extend far beyond just the healthcare market.
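
Here is a minimal sketch of the whole stack in action; the endpoint URL, credentials and payload are all hypothetical:

    # Minimal sketch of a JSON-over-HTTPS exchange with basic authentication.
    # The endpoint URL, credentials and payload are hypothetical.
    import base64
    import json
    import urllib.request

    payload = {
        "patient_id": "12345",
        "observation": {"code": "GLU", "value": 5.4, "units": "mmol/L"},
    }

    request = urllib.request.Request(
        "https://example.org/api/observations",  # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Authentication and encryption ride along with the transport.
            "Authorization": "Basic "
            + base64.b64encode(b"user:secret").decode("ascii"),
        },
        method="POST",
    )

    with urllib.request.urlopen(request) as response:
        print(response.status, json.loads(response.read()))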

That is why I am not betting the future of my company on HL7. Our value was never really as an HL7 implementation tool. The value our tools provide is the wiggle room we give our customers to handle the incompatibilities that occur with real-world data. The Iguana Translator is all about making it easy to grab data from anywhere – be it HL7, X12, XML, JSON, databases or web services – and to munge, transform and consume that data.

That is the future I am betting on.

Eliot Muir - CEO of iNTERFACEWARE

Update April 5, 2011

A new comment on "The Rise and Fall of HL7" has appeared at http://blog.interfaceware.com/hl7/the-rise-and-fall-of-hl7/ responding to Eliot's assertion that "The only organisations spending money on v3 are governments and some big corporations like Oracle":

Eliot,
plenty of companies apart from Oracle are “spending money” on the V3 RIM. Here are just a few that are basing their strategy on it (and not just their interfaces):
Rik
As one might respond: this means that after more than ten years there are some 22 companies with RIM-based products. Are any of these products particularly impressive? Have they demonstrated some exceptional power of the RIM?

16 comments:

Grahame Grieve said...

json == xml as far as I can tell. At least in effect. All the rest of Eliot's comments are about technology. The real question he doesn't comment on is where shared semantics come from. v2 did well because it gave people loose semantics. It's a meta-standard, in other words. But it doesn't scale - because people have loose semantics. And I note that all actual uses of v3 have been as a meta-standard too.

In a messaging world, looking backwards (the one I live in 2 days a week), Eliot's comments sort of make sense, and it probably would be a good idea for HL7 to do a JSON v2 implementation. (I'll look into that.)

In the future world - one where we start standardising the way people think and work, and start streamlining clinical thinking to seriously allow decision support systems to start *helping* clinicians - in that world (which I live in 3 days a week, though we are only 0.1% into that one), Eliot's comments are nonsensical.

thomasbeale said...

Grahame, the point is not about concrete syntax or aesthetics; it is that messages can be constructed in a syntax like JSON with a structure that accurately mimics the intended data, i.e. what is being exported from a lab or EMR system... or... a standardised content model. And such a message can be far more easily processed than an HL7 message (particularly v3), which imposes its own idea of structure, obstructing the ability to faithfully transfer data from the sending system to the target. The paradigm was only ever useful many years ago in the early days of computing, when there was no UML and no programming languages able to directly represent data structures.

Why HL7 opted for the same paradigm again with v3 is a complete mystery, but the outcome isn't: the only bit people want now is CDA, a single schema whose content can sort of be controlled by other models.

The future is not message structures imposing themselves on systems; it is message structures being generated as a downstream product from formal content models agreed by regions or segments of the industry.

bluehollow said...

"In the future world - one were we start standardising the way people think"

Let's hope not; then innovation will truly be dead. Although if one frequents the HL7 architecture workspaces, the amount of groupthink there would certainly lead you to that conclusion.

Unknown said...

I disagree strongly with Grahame Grieve, one of the main proponents of HL7v3, who is in a defensive position here. The analysis published by Eliot Muir is absolutely adequate. The solution with JSON and https is not, of course, and Mr. Muir, it is unfortunate that you are diluting your excellent text with this [BTW Grahame, on a technical level, JSON is not at all the same as XML. XML is a document markup language that is unfortunately misused for data serialisation, while JSON is a proper serialisation format with much better parseability]. Muir's critique occurs at the business level, and he addresses the right issues. That Grahame is unable to acknowledge this critique is typical of the blindness, idiosyncrasy and self-isolation of a sect, because this is what HL7 has degenerated to. The HL7 Kool-Aid has gone stale, Grahame.

Rene Spronk said...

Eliot covers multiple issues in one post.

It is a marketing decision whether one pitches one's product as an "HL7 interface engine" or as a "general interface engine for healthcare". That decision, given that his main market is in North America, is probably more influenced by his experiences with HL7 v2 implementations than by anything else. His statements about v3 seem out of place when he discusses the repositioning/rebranding of his product.

As for v3, there's the RIM and there are interoperability standards based on that RIM. I just attended a v3 software developers meeting in Washington (see http://bit.ly/9j4ong) where we again saw implementations of the RIM without necessarily using any RIM-based interoperability standards. One can't make any bold statements right now about the success or failure of the RIM as a model, or about what kinds of projects the model is suitable for - the proof of the pudding is in the eating, and the tasting process has only recently started.

As for JSON: whether one serializes an object graph one way or the other is irrelevant (it may be faster, one may have other tools to process it) - the underlying semantics are the key aspect. In the v3 space, that means RIM-based semantics; but the same statement would hold for any other healthcare reference model that's being used.

Grahame Grieve said...

Gee. Thanks for all the supportive comments.

I'm fully aware of the differences between JSON and XML at the technical level. And I agree with Jobst about that. But from the point of view of a semantic standard, I don't care whether you want JSON or XML. It's all the same at that level.

My original point was simple: the future of HL7 isn't about syntax or technology, it's about semantics. Who defines them? Thomas's claim seems to be that now that there's better platform-level infrastructure for defining things, there's no longer any benefit in commonly agreed semantics (though considering your track record and our many discussions over the years, I'm sure that I misunderstand).

Blue Hollow: I was careless. If we standardise how people think at one level, we free people to be creative at a different level. It seems to me that we desperately need to standardise some parts of clinical medicine so that we can automate it and allow people to contribute value at a higher level. This is *not* the same as killing innovation.

Jobst: I do believe that there is a place for common agreement about structures and semantics. I'm sure that I haven't yet seen the ideal solution. It's not about the Kool-Aid. Perhaps I'm blind, but I don't see where semantics come into Eliot's solution.

bluehollow said...

Sorry Grahame,

I should have known that. I just don't know why HL7 has become so enamored with complexity and with making things difficult. Look at the SAIF work that is going on there; it seems like you have to write 8 documents to get anything at all done. Look two posts earlier in this blog at the caBIG assessment and read it. The same people that are pushing SAIF on HL7 pushed it on caBIG. The software people got so distracted by that and by ISO data types that it became difficult to make software that was useful to real people. Things should only be as complex as they have to be. Someone needs to stand up to the HL7 management and tell them that before they destroy the brand. Unfortunately, if you do this you will be branded as an unbeliever, and since I am not a CEO of a company it is not really in the cards for me.

Barry Smith said...

See the response to Grahame Grieve's comment of 4/02/2011 from Barry Smith here: http://hl7-watch.blogspot.com/2011/04/fall-of-rim.html.

Grahame Grieve said...

Bluehollow:

In spite of the fact that I'm a listed contributor to SAIF and a member of the architecture board which owns the document, I'm quite distressed by parts of the SAIF outcomes. In particular, we wrote a document that needs its own implementation guide in order for us to use it ourselves. Shades of this: http://dilbert.com/strips/comic/1996-07-05/. But it's OMG who got to MOF level 5, so we're not the worst offenders ;-)

Which makes me think:

SAIF Document: a document that spawns other documents.

But there is lots of useful and good thinking in SAIF too.

bluehollow said...

Thanks Grahame,

I know there are a lot of smart people in HL7, certainly a lot smarter than a guy like me. But they need to remember that whenever they make a standard, somebody needs to march into their CIO's office and explain why this extra work is going to create an ROI in a reasonable time that can be explained up the food chain. I think this is where Fridsma got it really right with the ONC's Direct Project. From day one they had the requirement to design for the little guy. That is really paying off for them now that it is time for adoption and you don't have to hire 10 consultants to hook you up.

MickyD said...

I applaud Mr Muir. This statement sums it up for me -

"There is no such thing a single optimal data model to serve all purposes"

NEHTA is trying to do something with enforced V2.3.x compliance which is arguably unachievable. CDA is just another specification that is trying to create a single contract for all things document-related. This is perhaps no different from the logical undertaking of wrapping up Pathology, Diagnostic Imaging, or referrals into their respective schemas as HL7 has tried to do. HL7 works great when all endpoints agree on the contract, as in hospitals, but history has shown, particularly in New Zealand and here in Australia, that as soon as one wishes to interoperate with an external third party, not everyone speaks the same version of HL7, nor can they even agree on what a particular version should look like.

Consider EDI purchase orders for a moment - it would be foolish to believe that one standard would work everywhere. One vendor may wish to include extra metadata, for one thing. So it logically follows with e-health, and history, as I mentioned, has shown this to be so. Even HL7, through its Z-segments, allows you to specify logical "metadata", but the mere inclusion of these non-standard segments is enough to invalidate the schema verification process on the recipient's endpoint.
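
A minimal sketch of that failure mode, with a made-up ZPV segment and a made-up recipient segment list:

    # Hypothetical message containing a non-standard Z-segment (ZPV).
    MESSAGE = (
        "MSH|^~\\&|APP|FAC|APP2|FAC2|201103310857||ORU^R01|1|P|2.3\r"
        "PID|1||12345\r"
        "ZPV|1|vendor-specific metadata\r"
    )

    # The recipient validates incoming segments against its published list.
    KNOWN_SEGMENTS = {"MSH", "PID", "OBR", "OBX"}

    for line in MESSAGE.strip().split("\r"):
        segment_id = line.split("|", 1)[0]
        if segment_id not in KNOWN_SEGMENTS:
            print("validation failure: unknown segment", segment_id)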

E-health standards bodies, as mentioned in the document, could perhaps consider the value of data transformation between an organisation's canonical schema and a third party's.

This logical practice is quite common in other fields of computing (basically anywhere you see XSD+XSLT), so it is a shame that in some geographies e-health has a reputation of being designed-by-committee.

Syed Muhammad Abidi said...

If we take semantic interoperability out of HL7 and leave this standard only as a parsing and transport mechanism, then in my view there is no purpose in having an HL7 standard at all.

For example, can we let each vendor come up with their own version of a clinical discharge summary?

I do agree with Eliot's comment that vendors and healthcare facilities are slow in embracing the HL7 v3 standard. Slow adoption of HL7 v3 is a failure of the HL7 organization.

Unknown said...

I do believe that the V3 group set off to find the perfect abstraction. You set your scope too high, I am afraid. If you take too long to release an easy-to-use first release, it will become antiquated and outdated before you even get a chance to release it. That is exactly what has happened. Time to go back, take V2, and move it from HL7 to the mainstream. I totally agree with Eliot.

Jeff Brandt said...

Funny, I have been thinking the same thing. First, v3 is very heavy, not suitable for the mobile environment. Second, it attempts to be all things to everyone, a difficult task. One of the advantages of XML is that you can read it. Not v3: lots of numbers, long strings of numbers. To indicate that a patient has no allergies, the section is 1530 bytes long. Time to refactor.
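
For contrast, a hypothetical JSON rendering of the same fact runs to a few dozen bytes:

    import json

    # Hypothetical JSON rendering of "patient has no known allergies" -
    # a few dozen bytes versus the ~1530-byte v3 section described above.
    record = {"patient_id": "12345", "allergies": []}
    encoded = json.dumps(record)
    print(encoded, "-", len(encoded), "bytes")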

HL7 is trying with FHIR, but it uses CDA headers... Supporting legacy systems is difficult and costly. We may want to rethink this strategy.

Jeff Brandt

Allison said...

Grahame wins, long live HL7!!! @hl7interfaces :)

Unknown said...

Grahame: You are completely correct. XML = JSON with regard to just being syntax / containers. They are devoid of data models and semantics.


HL7 - whether it is serialized with its own pipes, or by XML brackets, or by JSON curly braces - is all about containers and syntax.

HL7, JSON, XML all are serializations. Putting "x" inside of pipes, brackets, or curly braces does not magically give it meaning.

The only emerging standards that start to address semantics at web scale are W3C standards such as the Linked Data model (JSON-LD is the most programmer-friendly form of RDF; there are some ugly serializations too, though).

Much work has started in the RDF space, and it is gratifying to see that the HL7 organization now has a working group to represent its models at the lowest level (even its new and 'cool' FHIR model, which is just XML or JSON) in RDF as well.
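
A minimal sketch of what that looks like; the vocabulary URIs and values are hypothetical:

    import json

    # Hypothetical JSON-LD document: ordinary JSON plus an @context that
    # maps each key to a shared vocabulary URI - which is where the
    # semantics live, rather than in the container syntax.
    document = {
        "@context": {
            "patient": "http://example.org/vocab#patient",
            "diagnosis": "http://example.org/vocab#diagnosis",
        },
        "patient": "12345",
        "diagnosis": "J45.909",
    }
    print(json.dumps(document, indent=2))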


To believe that health data needs to be in its own "healthcare-special" containers is to believe that healthcare data is categorically different from all other kinds of data. Here's a secret: it's not.

Forget about semantics and the granular metadata-tagging approach recommended by the PCAST report, and you miss out on the world wide web of interlinked data.

Keep moving into the semantic web space and you will be relevant. Glad to see that you and the HL7 organization are now drinking the more generic W3C RDF Kool-Aid :)