There has been much discussion in recent years about the collection of data within clinical trials – are we collecting too much, or should we be collecting even more outcome measurements so that each individual trial can answer more questions?
It’s logical to think that expanding what we collect, and where we collect it from, could lead to more approved therapies – but does this additional effort speed up or slow down a clinical trial?
Trial data is changing
In the past, trial teams have collected data and stored it in structured databases that were often built specifically for that trial. As trials evolve, the way we collect outcomes is progressing too. In today’s trials, it’s reasonable to expect data to come from multiple sources: the electronic health record, mobile health apps, biomarker data, outcomes collected directly from participants at trial clinic visits – the list really does go on.
So, more data means more management, and therefore a slower clinical trial, and therefore a bad idea, right?
We need sophisticated technology
If bigger, more comprehensive datasets are made available to pharmaceutical companies, secondary analysis becomes far easier. Instead of debating which measures to collect and how to use them, researchers will be able to take the time to uncover the secrets held in the mass of information that has already been collected.
Technology is absolutely crucial to making this work. With structured data sources being mixed with unstructured ones, researchers could spend years organising the numbers before they can even begin looking for patterns. That’s where tech comes in.
Using technology with advanced metadata management capabilities that can handle this huge volume of information is a necessity as clinical trials evolve further.
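To make the data-harmonisation problem concrete, here is a minimal sketch of aligning two hypothetical trial data sources – structured clinic-visit records and timestamped readings from a mobile health app. All participant IDs, column names and values are illustrative assumptions, not from any real trial:

```python
# Illustrative sketch: harmonising structured clinic-visit outcomes with
# app-collected readings. All data below is made up for the example.
import pandas as pd

# Structured source: outcomes captured at scheduled clinic visits.
visits = pd.DataFrame({
    "participant_id": ["P01", "P01", "P02"],
    "visit_date": pd.to_datetime(["2024-01-10", "2024-02-10", "2024-01-12"]),
    "blood_pressure": [142, 131, 128],
})

# App source: irregular daily step counts from a wearable.
app = pd.DataFrame({
    "participant_id": ["P01"] * 4 + ["P02"] * 2,
    "reading_date": pd.to_datetime(
        ["2024-01-08", "2024-01-09", "2024-02-09", "2024-02-12",
         "2024-01-11", "2024-01-14"]),
    "steps": [4200, 5100, 6800, 7000, 9100, 8800],
})

# Match each clinic visit with the nearest app reading for the same
# participant, within a 2-day window, so both outcome types can be
# analysed side by side.
merged = pd.merge_asof(
    visits.sort_values("visit_date"),
    app.sort_values("reading_date"),
    left_on="visit_date",
    right_on="reading_date",
    by="participant_id",
    direction="nearest",
    tolerance=pd.Timedelta(days=2),
)
print(merged[["participant_id", "visit_date", "blood_pressure", "steps"]])
```

The hard part in practice is exactly what the merge glosses over – deciding how close in time two measurements must be to count as "the same observation" – which is why metadata about when, how and by what device each value was captured matters so much.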
Once we’ve got technology right, the potential is huge
Think about it – getting the technology in place to handle huge volumes of unstructured data could really change the way that the biopharmaceutical industry works.
We could eventually be looking at technology that is able to handle colossal amounts of real-world data. Routine analysis of that data could flag up everything from which patients benefit most from a drug based on their genomic profile, to side-effects of new drugs detected quicker than ever before.
The prospect of combining multiple data types for better outcomes may seem time-consuming, daunting and not worth the additional effort. Yes, the extra collection and curation may slow an individual study down at first, but combining these sources and making that information available throughout a study increases the probability of a drug reaching the market. In addition, the data can be analysed after the study has finished, potentially increasing efficiency by reducing the need for further studies.