Eligibility Criteria, Screen Failures, and another RDF Success Story

It's T minus 6 weeks (approximately) for the PhUSE 13th Annual Conference in Edinburgh, Scotland, and I'm beaming with excitement. I'm involved in two study data projects using RDF, and both are going very well. The first, in collaboration with Tim Williams and an enthusiastic project team of PhUSE volunteers, is called Clinical Trials Data in RDF. Among its various goals, it will demonstrate how study data in RDF can be used to automatically generate highly standards-conforming, submission-quality SDTM datasets.

But it's the second paper that I want to discuss today. It's called "Managing Study Workflow Using the RDF." The paper is in pre-publication status so I can't share it today, but I plan to post a copy here after the conference. I include the Abstract below.

In a nutshell, the paper describes how one can represent study activity start rules using SPIN (SPARQL Inferencing Notation), a rule language expressed in RDF, to identify which study activities are to be performed next based on which activities have already been performed. It turns out that the start rule for the Randomization activity in a typical double-blind trial is in fact the Eligibility Criteria for the study. Here it is, in an executable form that, when combined with a standard COTS (commercial off-the-shelf) RDF inferencing engine, can automatically determine eligibility. How cool is that?
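The paper's actual SPIN rules aren't reproduced in this post, but the general idea of an activity start rule can be sketched in plain Python. Everything below is illustrative: the activity names and prerequisite sets are hypothetical, not taken from the paper's ontology.

```python
# Hypothetical sketch of activity start rules: each activity lists the
# activities that must already be complete before it can begin.
# Activity names and prerequisites are illustrative only.

start_rules = {
    "InformedConsent": set(),            # no prerequisites
    "Screening":       {"InformedConsent"},
    "Randomization":   {"Screening"},    # in the paper, this start rule is
                                         # the study's eligibility criteria
    "Treatment":       {"Randomization"},
}

def next_activities(completed):
    """Return activities whose prerequisites are all complete."""
    return sorted(
        activity
        for activity, prereqs in start_rules.items()
        if activity not in completed and prereqs <= completed
    )

print(next_activities({"InformedConsent"}))  # ['Screening']
```

An inferencing engine plays the same role as `next_activities` here: given the facts already in the knowledgebase, it derives which activities are enabled next.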

A typical eligibility rule consists of multiple subrules, each of which must be TRUE for the overall rule to be true (e.g., AGE must be >=18 years AND RPR must be negative AND Pregnancy Test must be negative, and so on); an exclusion criterion can be negated and added as a subrule. The ontology also describes how to skip subrules that can logically be skipped (e.g., Pregnancy Test must be negative in a male subject). The end result is that identifying an Eligible Subject is automatic, performed simply by entering the screening test results in the knowledgebase. (Think of a knowledgebase as an RDF database.)
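As a rough illustration of that conjunction-of-subrules structure (not the paper's SPIN rule), here is a minimal Python sketch. The criteria, field names, and values are all hypothetical; note how the pregnancy-test subrule is logically skipped for male subjects, and how an exclusion criterion appears negated as a subrule.

```python
# Minimal sketch of eligibility as a conjunction of subrules.
# All criteria and field names are hypothetical, not from the paper.

def eligible(subject):
    subrules = [
        ("Age >= 18",    lambda s: s["age"] >= 18),
        ("RPR negative", lambda s: s["rpr"] == "NEG"),
        # Logically skippable subrule: pregnancy test is ignored
        # for male subjects (the "or" short-circuits the check).
        ("Pregnancy test negative",
         lambda s: s["sex"] == "M" or s["preg_test"] == "NEG"),
        # An exclusion criterion ("history of hepatitis") negated
        # and added as a subrule ("no history of hepatitis").
        ("No hepatitis history", lambda s: not s["hepatitis"]),
    ]
    # The overall rule is TRUE only if every subrule is TRUE.
    return all(check(subject) for _, check in subrules)

s1 = {"age": 34, "sex": "M", "rpr": "NEG",
      "preg_test": None, "hepatitis": False}
print(eligible(s1))  # True
```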

Without going into the details (wait for the paper!), the rule points to all the screening activities that matter, checks each one for the expected outcome/result, and returns TRUE or FALSE depending on whether the conditions of the rule are met. If the rule outcome is TRUE, the subject is eligible and the Randomization activity is enabled. If the rule is FALSE, then just the opposite. The paper describes data from eight hypothetical subjects screened for a hypothetical study with just a few screening tests/criteria. The ontology came up with the correct eligibility outcome for all eight.

But there is more: by adding a few more simple SPIN rules to the ontology, the inferencing engine can readily provide a list of all Screen Failures and the tests that caused them to fail. It can also identify the tests that were logically skipped and therefore ignored for eligibility testing purposes. Do you want to determine which Screen Failure subjects received study medication? Another SPIN rule can do that too. The possibilities are quite exciting. It makes RDF, in my humble opinion, a strong candidate for representing clinical trial data during study conduct. No other standard that I know of supports this kind of automation "out of the box." In RDF, the model and the implementation of the model are the same! And, once one is ready to submit, you press another button and submission-quality SDTM datasets are generated (which the first project I mentioned intends to demonstrate).
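The "list all Screen Failures and the tests that caused them to fail" query can be sketched the same way. This is plain Python over hypothetical screening results, standing in for a SPIN rule run over a knowledgebase; subject IDs, criteria, and values are invented for illustration.

```python
# Hedged sketch of a screen-failure report: for each subject, collect
# the criteria that evaluated to FALSE. Data and criteria are invented.

subjects = {
    "001": {"age": 34, "sex": "M", "rpr": "NEG"},
    "002": {"age": 16, "sex": "F", "rpr": "NEG"},
    "003": {"age": 40, "sex": "F", "rpr": "POS"},
}

criteria = {
    "Age >= 18":    lambda s: s["age"] >= 18,
    "RPR negative": lambda s: s["rpr"] == "NEG",
}

def screen_failures(subjects):
    """Map each failing subject to the list of tests it failed."""
    failures = {}
    for subj_id, results in subjects.items():
        failed = [name for name, check in criteria.items()
                  if not check(results)]
        if failed:
            failures[subj_id] = failed
    return failures

print(screen_failures(subjects))
# {'002': ['Age >= 18'], '003': ['RPR negative']}
```

In the RDF version, this report is not a separate program: it is just one more rule added to the same ontology, and the inferencing engine derives the answer from the facts already entered.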

For more details, contact me, or wait until after the PhUSE meeting in October for the full paper.

A clinical study is fundamentally a collection of activities that are performed according to protocol-specified rules. It is not unusual for a single subject to undergo hundreds of study-related activities. Managing this workflow is a formidable challenge. The investigator must ensure that all activities are conducted at the right time and in the correct sequence, sometimes skipping activities that logically need not be done. It is not surprising that errors occur.

This paper explores the use of the Resource Description Framework (RDF) and related standards to automate the management of a study workflow. It describes how protocol information can be expressed in the RDF in a computable way, such that an information system can easily identify which activities have been performed, determine which activities should be performed next, and which can be logically skipped. The use of this approach has the potential to improve how studies are conducted, resulting in better compliance and better data.
