A new study by Stefan Schulz and his colleagues on "The Pitfalls of Thesaurus Ontologization", however, suggests that the organization contracted to carry out this overhaul may in fact have made things worse. As Schulz et al. point out,

more than 76,000 axioms in the OWL-DL version [of the NCIt] make incorrect assertions if interpreted according to description logics semantics. These axioms therefore constitute a huge source for unintended models, rendering most logic-based reasoning unreliable.

These problems are not unique to the NCIt – they affect other major clinical terminology resources employing one or other description logic-based approach.
As we believe we have demonstrated, for example, in the OGMS (Ontology for General Medical Science) initiative, the use of OWL-DL is compatible with a realistic representation of a complex domain; but such a representation can be achieved only through a painstaking analysis of the types of entities in the domain in question and of the relations between them. Translation of natural language assertions into OWL at the superficial level of the sort we find in the NCIt too often leads to errors.
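To make the failure mode concrete, here is a minimal sketch, in Python with the owlready2 library, of the kind of superficial translation at issue. The ontology IRI, class names, and property are invented for illustration; they are not drawn from the NCIt itself.

```python
from owlready2 import get_ontology, Thing, ObjectProperty

# Invented toy ontology: a thesaurus row such as
#   Melanoma -- Disease_Has_Associated_Gene -- BRAF
# is translated mechanically into an OWL existential restriction.
onto = get_ontology("http://example.org/toy-thesaurus.owl")

with onto:
    class Gene(Thing): pass
    class BRAF_Gene(Gene): pass
    class Disease(Thing): pass

    class disease_has_associated_gene(ObjectProperty):
        domain = [Disease]
        range = [Gene]

    class Melanoma(Disease): pass

    # Under description logic semantics this axiom says that EVERY
    # instance of Melanoma is associated with some BRAF gene -- not the
    # intended thesaurus reading, on which the association holds only
    # for some cases. Models satisfying the stronger claim are exactly
    # the "unintended models" of which Schulz et al. speak.
    Melanoma.is_a.append(disease_has_associated_gene.some(BRAF_Gene))
```

The axiom is syntactically well-formed OWL-DL; the error lies entirely in the mismatch between the universal force of the restriction (it quantifies over all instances of the subject class) and the weaker associative reading intended by the thesaurus curators.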
The HL7 organization itself, for good or ill, has not yet embraced the description logic-based approach, and is thus gratifyingly free of the sorts of errors referred to above. It does, however, exert a certain influence on the NCI, which extends also to the NCI Thesaurus.
Comment (November 18, 2010) from JL (eHealth software architect):
Well, in the end the fundamental problem in eHealth is the lack of market incentives to embrace efficiently working solutions. The (partially inevitable) regulation of the health domain by the state creates a specific space in which the tyranny of mediocrity thrives. Hospitals and GPs have incentives not to use good software. The use cases are so primitive that they can be met with simple measures, because physicians do not want sophisticated eHealth systems. Business domains with market forces in place (logistics, transportation) are embracing clever software ... it is so depressing.

Comment (November 18, 2010) from Tom Beale:
This is bad. But there is likely a grain of truth in that statement about corrections not changing the ability of the NCIt to meet its use cases. I know how this looks, especially to us cynics, but my observation of SNOMED CT/IHTSDO over the last 2 years (including going to committee and SIG meetings, etc.) is that there is almost no prospect of SNOMED CT being generally used for computational inferencing in a clinical or research environment in less than 5 years. The problem of the errors is almost secondary: the challenges of educating not just end users but also procurement people, software architects and developers, of getting terminology service interfaces agreed (i.e. CTS 2), and a myriad of other purely practical things mean that it could be a long time before all but the most progressive organisations start deploying anything like business intelligence apps or computerised clinical guidelines. I think nearly all computation in the short term will be done against IS-A relationships on ref sets carved out of SNOMED, removing specific errors along the way.
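By way of illustration of what such IS-A-only computation amounts to: subsumption testing over a carved-out fragment reduces to reachability in a directed acyclic graph of parent links. A minimal sketch follows, with invented concept names standing in for real SNOMED CT codes.

```python
from collections import deque

# Invented child -> parents IS-A edges, standing in for a small "ref set"
# carved out of a terminology (the names are made up, not SNOMED CT codes).
IS_A = {
    "Melanoma": {"MalignantNeoplasm", "SkinDisorder"},
    "MalignantNeoplasm": {"Neoplasm"},
    "SkinDisorder": {"Disease"},
    "Neoplasm": {"Disease"},
}

def ancestors(concept):
    """All subsumers of `concept`, by breadth-first walk up the IS-A DAG."""
    seen, queue = set(), deque(IS_A.get(concept, ()))
    while queue:
        c = queue.popleft()
        if c not in seen:
            seen.add(c)
            queue.extend(IS_A.get(c, ()))
    return seen

def subsumes(general, specific):
    """True if `general` subsumes `specific` using IS-A edges alone."""
    return general == specific or general in ancestors(specific)

print(subsumes("Disease", "Melanoma"))        # True
print(subsumes("Neoplasm", "SkinDisorder"))   # False
```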
This is not necessarily all bad: it means there is a 5-year window to get SNOMED CT sorted out properly. The challenge is to come up with the right analysis and change programme to do this. 5 years will seem an amazingly, and possibly unacceptably, long time, but the evidence so far of uptake of, and engagement with, complex technologies and standards in the e-health space is that 5 years is extremely realistic.

Reply (November 19, 2010) from Barry Smith:
I agree with the views of JL and Beale, above, to the effect that there is little prospect of SNOMED CT, or of the NCIt, being generally used for computational inferencing in the immediate future. In my view, however, this makes the task of providing a strategy for the coherent evolution of these artifacts even more important. Currently we are witnessing a situation in which large clinical terminologies are subjected to regular and poorly coordinated revisions, with the result that information annotated in their terms is of uncertain value. To create the possibility of coherent evolution and gradually increasing value of these artifacts – of the sort that has a chance of motivating the needed investment in a more sophisticated computational infrastructure – we need to identify a growing set of principles of good practice in terminology development, and to ensure as early as possible that the terminologies in question are developed in such a way as to satisfy these principles. One such principle is, surely: freedom from logical error.
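Part of enforcing that principle can at least be made mechanical: running a description logic reasoner over each OWL release and flagging unsatisfiable classes. The sketch below is again in Python with owlready2 (which drives its bundled HermiT reasoner and so requires a local Java runtime); the file path is a placeholder, not a real release location.

```python
from owlready2 import get_ontology, default_world, sync_reasoner

# Placeholder path: point this at the OWL-DL release to be checked.
onto = get_ontology("file:///path/to/terminology-release.owl").load()

with onto:
    # Classify the ontology with HermiT (requires Java on the PATH).
    sync_reasoner()

# Classes inferred to be equivalent to owl:Nothing are unsatisfiable --
# one concrete, mechanically checkable species of "logical error".
for cls in default_world.inconsistent_classes():
    print("Unsatisfiable:", cls)
```

A clean reasoner run is of course only a necessary condition, not a sufficient one: the misassertions Schulz et al. count are for the most part perfectly satisfiable, merely false of the domain they are supposed to represent.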