Sven Jungmann

Is your real world evidence (RWE) good enough? Taking a page from the FDA’s playbook

Let’s talk seriously about real world evidence. As more medical data becomes available and interconnected, it’s tempting to look to this evidence as the missing puzzle piece, not only for proving effectiveness and safety but also as inspiration for new digital health solutions. However, innovators should tread carefully: this last missing piece may not fit your puzzle.


For entrepreneurs monetizing products and/or data, it is essential to understand what counts as usable and valuable data in the eyes of government regulators, pharmaceutical companies, research institutions, and other stakeholders. So, let’s quickly break down what we mean by RWE and RWD.

RWE and RWD

  • Real world evidence (RWE) is clinical evidence (i.e., gathered from the observation and treatment of actual patients) that offers insight into the benefits and risks of a solution or product, derived from real world data.

  • Real world data (RWD) is data concerning patient health status and care delivery, which can come from electronic health records (EHRs), claims, disease registries, self-reported data, and other sources.

As solution developers turn to RWD and RWE to identify new treatment approaches, spot care gaps that signal market opportunities, and support clinical trial designs, it is critical to assess the quality of that data because, to put it simply, the evidence might not be “good enough” for innovation or regulatory purposes.


For an introductory overview of assessing RWE, we can turn to the U.S. Food and Drug Administration (FDA), which has written extensive regulatory guidance for medical devices and products. In 2018, the agency published a framework for evaluating RWE for regulatory purposes, from which innovators can extract some basic guidance (whether or not their products eventually fall under FDA scrutiny).


There are two guiding principles for assessment of RWE: reliability and relevance.

  • Reliability concerns how the data was collected and whether adequate quality control was applied during and after collection. Real world data has its limitations, and these require oversight.

  • Relevance has to do with whether the data is appropriate to the question being asked, whether it includes adjacent information like exposure, outcomes, and covariates, and “whether available analytic tools are sufficient to address each question of interest.”

Let’s see how these two concepts interact with common RWD sources to deliver RWE.

Electronic Health Records (EHRs) provide details of patient health captured during routine visits and hospitalizations. To use this data, however, EHRs must be checked for consistency, completeness, and trends, or, to use the FDA’s term, reliability. Additionally, the relevance of this data can be weakened by a lack of uniformity in documentation (due to differences between providers and healthcare systems), by missing covariates (e.g., smoking, obesity, other health factors), and by analytic tools that cannot properly process the data (due to its unstructured or inconsistent nature).
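As a first pass, a reliability check on an EHR extract can simply quantify how often key covariates are recorded and how many distinct values each field takes (a rough hint at inconsistent coding). The sketch below is a minimal illustration, assuming a pandas DataFrame with hypothetical column names (smoking_status, bmi); real pipelines add plausibility and cross-provider consistency checks on top.

```python
# Minimal sketch of an EHR completeness/consistency summary.
# Column names are hypothetical examples, not a standard schema.
import pandas as pd

def ehr_reliability_report(ehr: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Summarize completeness and value diversity for selected covariates."""
    rows = []
    for col in covariates:
        in_data = col in ehr.columns
        rows.append({
            "covariate": col,
            "completeness": round(float(ehr[col].notna().mean()), 3) if in_data else 0.0,
            # Unexpectedly many distinct values can hint at inconsistent coding
            # (e.g., "never" vs. "Never" vs. free text).
            "n_distinct": int(ehr[col].nunique()) if in_data else 0,
        })
    return pd.DataFrame(rows)

# Example usage with toy data (values are illustrative only)
ehr = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "visit_date": pd.to_datetime(["2023-01-05", "2023-06-01", "2023-02-11", "2023-03-20"]),
    "smoking_status": ["never", None, "current", None],
    "bmi": [27.4, 27.9, None, 31.2],
})
print(ehr_reliability_report(ehr, ["smoking_status", "bmi"]))
```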


In contrast, medical claims offer more structured data (e.g., in the USA they generally use consistent, standardized international codes for medical diagnoses). Ensuring reliability here requires quality control of these codes (checking that they are present and valid), and ensuring relevance requires a format compatible with your analysis tools. However, medical claims suffer in both assessment areas because they generally capture only significant health events (such as hospitalizations) and miss critical data about follow-up care, changes in or severity of chronic conditions, and other covariates.
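A basic quality-control step on claims data might check that every record carries a diagnosis code and that the code at least matches the general ICD-10 shape. The sketch below uses a deliberately simplified pattern and a hypothetical field name (diagnosis_code); it is not a full ICD-10-CM validator.

```python
# Minimal sketch of diagnosis-code quality control on claims records.
# The regex is a simplified ICD-10 shape check (letter, two digits, optional
# subcategory) and does not enforce the full ICD-10-CM code set.
import re

ICD10_PATTERN = re.compile(r"^[A-TV-Z][0-9]{2}(\.[0-9A-Z]{1,4})?$")

def check_claims_codes(claims: list[dict]) -> dict:
    """Count missing and malformed diagnosis codes in a list of claim records."""
    missing, malformed = 0, 0
    for record in claims:
        code = record.get("diagnosis_code")
        if not code:
            missing += 1
        elif not ICD10_PATTERN.match(code):
            malformed += 1
    return {"total": len(claims), "missing": missing, "malformed": malformed}

# Example usage with toy records (illustrative only)
claims = [
    {"diagnosis_code": "E11.9"},   # well-formed code
    {"diagnosis_code": "12345"},   # malformed
    {"diagnosis_code": None},      # missing
]
print(check_claims_codes(claims))
```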


Data from other countries is a valuable source of RWD, but its fitness for use can be limited by differences in healthcare systems. To ensure both reliability and relevance of this data, users must consider the variations in care delivery, how healthcare systems are organized, and how data is collected.


Patient registries are a more niche source of RWD, since they collect information from observational studies of populations with a particular disease or undergoing a particular treatment. This can be highly relevant if your product targets a particular condition, but ensuring reliability may require patient follow-up, efforts to minimize incomplete data, and other measures to assess data quality.


Does that puzzle piece fit?

Even if RWD collection methods are ideal, reliability is affected by differences in the priorities of whoever documents the care, and relevance is affected by the unstructured nature of healthcare’s varying documentation systems. So, how can you tell if your RWE is “good enough”? Ask yourself three questions (a rough screening sketch follows the list):

  • Appropriate Quality: How is the RWD collected and how complete is it?

  • Appropriate Subject: How will the RWD help answer your question?

  • Appropriate Structure: Can the RWD be used with your analytic tools?
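To make these three questions concrete, here is a rough screening sketch over a tabular RWD extract. The completeness threshold, the required-field list, and the dtype-based notion of “structured” are illustrative assumptions, not FDA criteria.

```python
# Minimal sketch of a fitness-for-use screen over a tabular RWD extract.
# Field names and thresholds are hypothetical; adapt them to your question.
import pandas as pd

def rwd_fitness_screen(df: pd.DataFrame,
                       required_fields: list[str],
                       min_completeness: float = 0.8) -> dict:
    """Screen an RWD table on quality, subject, and structure."""
    present = [f for f in required_fields if f in df.columns]
    completeness = {f: float(df[f].notna().mean()) for f in present}
    structured = {
        f: (pd.api.types.is_numeric_dtype(df[f])
            or pd.api.types.is_datetime64_any_dtype(df[f])
            or isinstance(df[f].dtype, pd.CategoricalDtype))
        for f in present
    }
    return {
        # Appropriate Quality: are the fields filled in often enough?
        "quality_ok": bool(completeness) and all(v >= min_completeness for v in completeness.values()),
        # Appropriate Subject: are exposure, outcome, and covariates all captured?
        "subject_ok": len(present) == len(required_fields),
        # Appropriate Structure: can analytic tools consume the fields directly?
        "structure_ok": bool(structured) and all(structured.values()),
        "completeness": completeness,
    }
```

A screen like this does not replace a proper fitness-for-use assessment, but it makes the “good enough” question measurable early, before you invest in a full study design.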

As RWE becomes increasingly prominent, gaps in relevance and reliability will hopefully be bridged by greater efforts to integrate currently non-interoperable healthcare systems, to include patient-reported outcomes, and to unify patient care records (e.g., through national health identification numbers). In the meantime, if you’re trying to finish your puzzle, I recommend reading the entire FDA framework to learn more about assessing real world evidence.
