Life sciences companies have only in recent years begun to fully seize the opportunity to apply their data systematically to a wide range of drug development and patient care challenges. While they increasingly see the advantages of treating data as a strategic asset in clinical development, the one exception has been discovery, where even now labs rely on paper, spreadsheets, or legacy systems. This makes it difficult to leverage in vivo data across the entire R&D lifecycle and ultimately slows the decision-making needed to get therapies to market faster.
What many of us in the in vivo space recognize is that the design, execution, and management of in vivo studies differ from the rest of the development space. While human clinical trials are highly structured, an in vivo study can be run in hundreds of different ways, and the resulting information is often highly unstructured, which makes it difficult to introduce AI tools or any type of advanced analytics.
As an example of how manual data practices remain today: several years ago, a respected scientific institute came to us after a flood in its vivarium, where it had lost all of its data because that data lived on color-coded Post-it® notes ruined by the humidity from the flood. The lab wanted help preventing a recurrence and asked for IoT sensors that could predict whether a flood was likely to occur. What it really should have been considering was putting its data on something more reliable, so that IoT sensors could then deliver those alerts. The first step, therefore, is to put the data into a digital format that provides the framework for technologies such as AI.
While paper – and, remarkably, even Post-it notes – are still used, the in vivo space is starting to see more document automation as scientists seek to free up their creative space. “Some labs are moving toward generating structured machine-readable data, which requires the use of appropriate automation tools,” Buchanan said. “So, I think there’s agreement that AI is not some magical force … It’s just another tool – an impressive tool, but one that needs to be designed, developed, deployed, and reoptimized in a way that works for the particular use of a particular group.”
The next step, according to the panelists, will be to overlay IoT sensors running machine learning algorithms to bring in more predictive models that can support better research. The purpose of the technology should be to let scientists and other end users work with whatever serves a study best. The best way to move labs away from archaic processes, such as spreadsheets or paper, is to provide usable solutions. Understandably, there is very little tolerance for technology that is not as fast or efficient as what people in the lab are currently using. The goal should always be to let scientists spend more time running experiments and less time on inefficient and, frankly, unreliable processes.
While the move to technology has to support the user, it is also important that the in vivo space catch up with the rest of R&D. You would be hard-pressed to find many people on the clinical side still using paper, and that gap presents a challenge because data needs to flow continuously through the pipeline.
The good news is that we are starting to see the in vivo space adopt digital tools, or at least establish a framework that enables data to be captured in a way that allows it to be reliably reused. Ultimately, what matters is not the whizz-bang technology but understanding the issue that needs to be solved and solving that problem, with the support of technology.